
AI

AI stands for “artificial intelligence”. It is a branch of computer science that deals with the development of algorithms and computer programs that can perform tasks that normally require human intelligence, such as recognizing speech, identifying objects, making decisions, and learning from experience. AI systems use a combination of techniques such as machine learning, deep learning, natural language processing, and computer vision to simulate human intelligence. The goal of AI is to create machines that can perform complex tasks autonomously and make decisions based on data and algorithms, without human intervention. AI has the potential to revolutionize many industries, from healthcare to transportation, and has become an important area of research and development for many companies and governments around the world.

  • “Artificial intelligence will reach human levels by around 2029. Follow that out further to, say, 2045, we will have multiplied the intelligence, the human biological machine intelligence of our civilization a billion-fold.” - Ray Kurzweil
  • “The development of full artificial intelligence could spell the end of the human race. It would take off on its own, and re-design itself at an ever-increasing rate.” - Stephen Hawking
  • “Artificial intelligence is the future, not only for Russia but for all humankind. It comes with colossal opportunities, but also threats that are difficult to predict. Whoever becomes the leader in this sphere will become the ruler of the world.” - Vladimir Putin
  • “The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else.” - Eliezer Yudkowsky
  • “The ultimate promise of technology is to make us master of a world that we command by the push of a button.” - Tim Wu
  • “AI will not replace lawyers, but lawyers who use AI will replace those who don’t.” - Rohit Talwar
  • “The question of whether a computer can think is no more interesting than the question of whether a submarine can swim.” - Edsger Dijkstra
  • “I visualize a time when we will be to robots what dogs are to humans, and I’m rooting for the machines.” - Claude Shannon
  • “It’s tempting to imagine a world where AI can do all our work for us, but AI’s current capabilities suggest that it will be far more helpful as an assistant than a replacement.” - Fei-Fei Li
  • “AI is not a silver bullet, but it’s the best tool we have for getting to the next layer of understanding in a world that’s too complex for human cognition alone.” - Chris Nicholson

Artificial Intelligence (AI) is advancing rapidly around the world, producing machines that can learn and carry out cognitive tasks previously reserved for humans. These technological advances are expected to have significant consequences for society and culture.

AI systems are now advising medical practitioners, scientists, and judges, and have become essential in analyzing and interpreting research data. As intelligent technology continues to replace human labor, there is a need for new forms of resilience and flexibility. Renowned public intellectuals such as Stephen Hawking have expressed concerns over the potential of AI to pose an existential threat to humanity by taking control of many aspects of daily life and societal organization.

Brief History of AI

  • The term ‘artificial intelligence’ was introduced in the 1950s for machines that could perform more than just routine calculations.
  • With advancements in computing power, the term was expanded to include machines that have the ability to learn.
  • Although there is no single definition of AI, it is generally agreed upon that machines based on AI or ‘cognitive computing’ have the potential to imitate or exceed human cognitive abilities, such as sensing, language interaction, reasoning, analysis, problem-solving, and even creativity.
  • These ‘intelligent machines’ are capable of demonstrating human-like learning capabilities through self-correction and self-relation mechanisms, using algorithms that embody ‘machine learning’ or ‘deep learning’.
  • These algorithms use ‘neural networks’ that mimic the functioning of the human brain (a minimal sketch follows this list).
  • To examine the ethical implications of AI, it is necessary to clarify its possible meanings.
  • The term ‘artificial intelligence’ was coined in 1955 by John McCarthy, Marvin Minsky, Nathaniel Rochester, and Claude E. Shannon.
  • The initial plan for the ‘study of artificial intelligence’ was based on the idea that every aspect of learning or intelligence could be described precisely enough to simulate it with a machine.
  • Over time, as the field developed and diversified, the number of meanings of AI increased, and there is no universally agreed-upon definition.
  • Different disciplinary approaches, such as computer science, electrical engineering, robotics, psychology, or philosophy, have various definitions of AI.
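
The ‘machine learning’ and ‘neural network’ ideas above can be made concrete with a minimal, purely illustrative sketch (nothing like a production system): a tiny neural network that ‘self-corrects’ by repeatedly adjusting its weights to reduce its own prediction error on a toy problem.

    # A minimal sketch of error-driven 'self-correction' in a tiny neural network.
    # Illustrative only; real systems use far larger networks and datasets.
    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    rng = np.random.default_rng(0)

    # Toy data: four examples of the XOR function.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    # One hidden layer of four units.
    W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
    W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)

    for _ in range(10_000):
        # Forward pass: compute the network's current predictions.
        h = sigmoid(X @ W1 + b1)
        pred = sigmoid(h @ W2 + b2)

        # Backward pass ('self-correction'): nudge the weights to reduce the error.
        d_out = (pred - y) * pred * (1 - pred)
        d_hid = (d_out @ W2.T) * h * (1 - h)
        W2 -= 0.5 * h.T @ d_out
        b2 -= 0.5 * d_out.sum(axis=0)
        W1 -= 0.5 * X.T @ d_hid
        b1 -= 0.5 * d_hid.sum(axis=0)

    print(pred.round(2))  # typically approaches [0, 1, 1, 0] as the error shrinks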

AI Today

  • Large multinational tech companies worldwide have started investing heavily in integrating AI into their products and services.
  • The increase in computing power has enabled the use of complex algorithms and working with ‘big data’ - massive datasets that can be utilized for machine learning.
  • These companies have almost unlimited computing power and access to data from billions of individuals to use as input for their AI systems.
  • Through these products and services, AI is rapidly gaining influence in people’s daily lives and in professional fields such as healthcare, education, scientific research, communications, transportation, security, and art.

Ethical issues

  • AI has societal and cultural implications that raise issues of freedom of expression, privacy and surveillance, ownership of data, bias and discrimination, manipulation of information and trust, power relations, and environmental impact.
  • AI challenges human cognitive capacities, bringing new challenges to human understanding.
  • Algorithms used in social media and news sites can spread disinformation, with implications for the meaning of facts and truth and for political interaction.
  • Machine learning can embed and exacerbate bias, potentially resulting in inequality, exclusion, and a threat to cultural diversity (a minimal sketch of how skewed data embeds bias follows this list).
  • The power generated by AI technology accentuates the asymmetry between individuals, groups, and nations, including the digital divide within and between countries.
  • Lack of access to fundamental elements such as algorithms, data, human resources, and computational resources may exacerbate the digital divide.
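
The point about machine learning embedding bias can be illustrated with a small, entirely synthetic sketch: a model trained on historically biased hiring decisions reproduces that bias, scoring two otherwise identical candidates differently. The data, groups, and thresholds below are invented for illustration only.

    # A minimal sketch of how machine learning can embed bias present in its
    # training data. All data here is synthetic.
    import numpy as np

    rng = np.random.default_rng(1)
    n = 2000

    # Synthetic applicants: a 'skill' score and a group membership (0 or 1).
    skill = rng.normal(size=n)
    group = rng.integers(0, 2, size=n)

    # Historical decisions were biased: group 1 needed a higher skill to be hired.
    hired = (skill > np.where(group == 1, 0.8, 0.0)).astype(float)

    # Train a simple logistic-regression model on the biased history.
    X = np.column_stack([skill, group, np.ones(n)])
    w = np.zeros(3)
    for _ in range(2000):
        p = 1 / (1 + np.exp(-X @ w))
        w -= 0.1 * X.T @ (p - hired) / n

    # Two identical candidates who differ only in group membership:
    candidates = np.array([[0.5, 0, 1.0], [0.5, 1, 1.0]])
    print(1 / (1 + np.exp(-candidates @ w)))  # the group-1 candidate scores lower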

Education

  1. Artificial Intelligence has an impact on the role of education in societies in several ways:
  • The labour displacement caused by some forms of AI requires retraining of employees and rethinking the final qualifications of educational programmes.
  • In a world of AI, education should empower citizens to develop critical thinking skills that include ‘algorithm awareness’ and the ability to reflect on the impact of AI on information, knowledge, and decision-making.
  2. The use of AI in education also raises ethical questions, including:
  • The role of AI in the educational process itself, as an element of digital learning environments, educational robotics, and systems for ‘learning analytics’, all of which require responsible development and implementation.
  3. Appropriate training for engineers and software developers is necessary to ensure responsible design and implementation of AI.

Societal role of Education

  1. AI is causing concerns regarding labour displacement and the speed of change it brings. This requires retraining of employees and has implications for the career paths of students.
  2. According to a McKinsey panel survey, investing in retraining and upskilling existing workers is seen as an urgent business priority.
  3. The rise of AI urges societies to rethink education and its social role.
  4. Traditional formal education provided by universities may no longer be sufficient for the digitized economies and AI applications of the 21st century.
  5. Information and knowledge are now omnipresent, requiring not only data literacy but also AI literacy for critical reflection on intelligent computer systems.
  6. Education should enable people to be versatile and resilient in a continuously developing labour market, where employees need to reskill themselves regularly.
  7. Lifelong learning ideas may need to be up-scaled into a model of continuous education, with the development of other types of degrees and certificates.

AI in Teaching and Learning

  • Open educational resources (OER) have increased the availability of high-quality teaching resources through the internet.
  • OERs have the potential to impact education on a global scale, but this potential has not been fully realized as completion rates for MOOCs remain low.
  • The wide variety and depth of available resources has led to two problems.
  • The first problem is finding the right resource for individual learners or teachers wishing to reuse a resource in their own teaching materials (a minimal retrieval sketch follows this list).
  • The second problem is reducing diversity, as some resources become very popular at the expense of other potentially more relevant but less accessible ones.
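
As a purely illustrative sketch of the first problem, one simple (assumed, not prescribed) approach is to rank resources against a learner's query by text similarity; the resource titles and query below are made up.

    # A minimal sketch of ranking open educational resources against a query
    # with TF-IDF text similarity. Resource titles are invented.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    resources = [
        "Introduction to linear algebra with worked examples",
        "Photosynthesis explained for secondary school biology",
        "A beginner's guide to Python programming",
        "Matrix decompositions and their applications",
    ]
    query = ["open course on matrices and linear algebra"]

    vectorizer = TfidfVectorizer(stop_words="english")
    doc_vectors = vectorizer.fit_transform(resources)
    query_vector = vectorizer.transform(query)

    # Rank resources by similarity to the query, most relevant first.
    scores = cosine_similarity(query_vector, doc_vectors)[0]
    for idx in scores.argsort()[::-1]:
        print(f"{scores[idx]:.2f}  {resources[idx]}")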

AI and Scientific Understanding

  • AI challenges existing scientific explanations and theories due to its powerful machine learning and deep learning capabilities.
  • The conventional view of science requires scientific explanations to be based on causal or unifying understandings that can predict specific outcomes.
  • AI, on the other hand, can produce impressively accurate predictions without providing a causal or unifying explanation, using algorithms that don’t work with the same semantic concepts as human explanations (see the sketch after this list).
  • This gap between successful predictions and satisfactory scientific understanding may have implications for decision-making based on AI and for trust in such systems.
  • Machine learning’s quality depends heavily on the available data used to train the algorithms, which can pose transparency issues since most AI applications are developed by private companies.
  • The lack of transparency in AI development contrasts with the traditional scientific method, which requires replicability to warrant the validity of results.
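
The ‘accurate prediction without explanation’ point can be illustrated with a small sketch on synthetic data: a black-box model fits a hidden relationship well, yet its fitted form (hundreds of decision trees) offers no causal account of why its predictions work.

    # A minimal sketch of prediction without explanation, using synthetic data.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(2)
    X = rng.uniform(-3, 3, size=(2000, 3))
    # Hidden 'true' process, which the model never sees in symbolic form.
    y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + rng.normal(scale=0.1, size=2000)

    model = RandomForestRegressor(n_estimators=200, random_state=0)
    model.fit(X[:1500], y[:1500])

    # Predictions are accurate on held-out data...
    print("R^2 on held-out data:", round(model.score(X[1500:], y[1500:]), 3))
    # ...but the fitted object is a forest of trees, not a causal equation.
    print("Number of trees:", len(model.estimators_))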

AI, Life Science and Health

  • AI technologies have transformed the healthcare and bioethics landscape in both positive and negative ways.
  • Positive effects include more precision in robotic surgery and better care for autistic children, while ethical concerns include costs and accessibility.
  • The use of internet sites and mobile phone applications for self-diagnosis raises questions about medical authority, acceptance of self-medication, and the doctor-patient relationship.
  • AI technologies can free up time for health providers to dedicate to their patients, but they may replace the human elements of care.
  • AI-based technologies for the elderly, such as assistive social robots, can be useful for medical reasons but may lead to social isolation.
  • AI also raises questions about human enhancement and therapy, such as integrating AI with the human brain using a neural interface. This has important implications for what it means to be human and what normal human functioning means.

AI and Environmental Science

  • AI can have a positive impact on environmental science through various applications, including improving scientific understanding of ecological, biological, and climatic processes, enhancing recycling and energy efficiency, and improving agriculture and farming. However, the potential benefits need to be balanced against the environmental impact of the entire AI and IT production cycle, including the generation of electronic waste and the use of rare-earth elements.
  • AI can aid in disaster risk management by predicting and responding to environmental hazards such as floods, droughts, and extreme weather events. For instance, the UNESCO G-WADI Geoserver application, which uses the PERSIANN satellite-based precipitation retrieval algorithm, is being used to inform emergency planning and management of hydrological risks. Google has also contributed to disaster management through its AI-enabled flood forecasting initiative.
  • The development of AI technologies that could bring potential benefit for disaster management and environmental protection should be encouraged, including by private companies. However, the potential ethical concerns and environmental impact of these technologies should be considered and addressed.

AI and Decision-making

AI methods can have a significant impact in various areas, including the legal professions, judiciary, legislative and administrative public bodies, by increasing efficiency and accuracy in counseling and litigation, and aiding judges in drafting new decisions.

One of the key issues with using algorithms is that their results may not always be intelligible to humans. This problem extends to the wider field of data-driven decision-making, which is becoming more prevalent as AI technology advances. AI engines can process and categorize vast amounts of data in rapidly-evolving contexts, such as environmental monitoring, disaster prediction and response, anticipation of social unrest, and military battlefield planning.

While AI-driven decisions have the potential to be efficient and accurate, their validity should be treated with caution. These decisions may not be fair, just, accurate, or appropriate due to inaccuracies, discriminatory outcomes, embedded or inserted bias, and limitations of the learning process. Humans possess a larger “world view” and tacit knowledge that outperform AI in critical and complex situations. For example, in battlefield decisions, humans draw on fundamentally different decision-making architectures, including sensitivity to potential bias, which makes them better equipped to decide.

It is highly questionable whether AI will have the capacity, at least in the near future, to cope with ambiguous and rapidly evolving data or interpret and execute human intentions when faced with complex and multifaceted data. Even having a human “in the loop” to moderate a machine decision may not be sufficient to produce a “good” decision, as cognitive AI does not make decisions in the same way as humans. The stochastic behavior of cognitive AI, together with the human’s consequent inability to know why a particular choice has been made by the system, means the choice is less likely to be trusted.

The Allegheny Family Screening Tool (AFST) is a cautionary tale that illustrates some of the problems of using AI to assist decision-making in social contexts. The predictive model used to forecast child neglect and abuse in Allegheny, Pennsylvania, was put in place with the belief that data-driven decisions would provide the promise of objective, unbiased decisions that would solve the problems of public administration with scarce resources. However, recent research has argued that the AFST tool has harmful implications for the population it hoped to serve. It oversamples the poor and uses proxies to understand and predict child abuse in a way that inherently disadvantages poor working families. As a result, it exacerbates existing structural discrimination against the poor and has a disproportionately adverse impact on vulnerable communities.
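
The proxy problem described above can be illustrated with a purely hypothetical sketch; it is not the actual AFST model or its variables. If a risk score counts contacts with public-assistance services as a feature, families that rely on those services are scored as higher risk even when everything else about their situation is identical.

    # A hypothetical sketch of proxy-driven bias in a risk score.
    # All weights, features, and numbers are invented for illustration.

    def risk_score(reports: int, public_assistance_records: int) -> float:
        """Toy risk score combining referral reports and a proxy feature."""
        return 0.6 * reports + 0.4 * public_assistance_records

    # Two families with the same number of referral reports; the poorer family
    # has more recorded contacts with public services simply because it uses them.
    poor_family = risk_score(reports=2, public_assistance_records=8)
    wealthy_family = risk_score(reports=2, public_assistance_records=0)

    print(poor_family, wealthy_family)  # 4.4 vs. 1.2: the proxy drives the gap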

In some contexts, using AI as a decision maker might be seen as a pact with the devil. In order to take advantage of the speed and large data ingestion and categorization capabilities of an AI engine, we will have to give up the ability to influence that decision. The effects of such decisions can be profound, especially in conflict situations.

Ethical Concerns

Gender Bias

It is likely that a search for "school girl" would yield a page filled with sexualized images of women and girls in various costumes, while a search for "school boy" would mostly show normal images of young boys without sexualization. These disparities exemplify gender bias in artificial intelligence, which originates from stereotypical representations deeply ingrained in our societies.

AI systems often produce biased results, as search engine technology processes large amounts of data and prioritizes results based on user preferences and location. This can create an echo chamber that perpetuates biases and reinforces prejudices and stereotypes online.

To ensure more equal and accurate results, it is important to minimize gender bias in the development of algorithms, the large data sets used for machine learning, and the use of AI for decision-making. Reporting biased search results can also help address this issue.
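
One small, illustrative step in that direction is auditing how a training dataset represents each gender before any model is built; the records below are invented.

    # A minimal sketch of auditing an image-caption dataset for gender skew
    # before it is used for machine learning. The records are invented.
    from collections import Counter

    dataset = [
        {"caption": "girl posing in costume", "label": "stereotyped"},
        {"caption": "boy solving a maths problem", "label": "neutral"},
        {"caption": "girl in a science lab", "label": "neutral"},
        {"caption": "girl posing for a photo", "label": "stereotyped"},
        {"caption": "boy playing football", "label": "neutral"},
    ]

    counts = Counter()
    for record in dataset:
        gender = "girl" if "girl" in record["caption"] else "boy"
        counts[(gender, record["label"])] += 1

    # Report the label distribution per gender so any skew is visible up front.
    for (gender, label), n in sorted(counts.items()):
        print(f"{gender:5s} {label:12s} {n}")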

The accurate representation of women in search results should avoid sexualization and instead reflect a diverse range of images that accurately depict women and girls in various roles and contexts. By promoting gender equality in AI, we can work towards a more just and equitable society.

AI and Court of Law

The use of AI in judicial systems worldwide is on the rise, prompting the need for an exploration of ethical questions. It is believed that AI could potentially evaluate cases and administer justice more effectively, quickly, and efficiently than a human judge.

The impact of AI could be far-reaching, extending to legal professions, the judiciary, and aiding the decision-making of legislative and administrative public bodies. AI tools can enhance the accuracy and efficiency of lawyers in counselling and litigation, offering benefits to lawyers, clients, and society at large. Judges can complement and improve existing software systems using AI tools to help them draft new decisions. This trend towards increasing reliance on autonomous systems is called the "automatization of justice."

Some suggest that AI can help create a fairer criminal justice system, where machines can assess and weigh relevant factors better than humans, leveraging their speed and capacity to analyze vast amounts of data. This would result in decisions based on informed reasoning, free of bias and subjectivity.

However, the use of AI in judicial systems also raises several ethical challenges:

    • Lack of transparency of AI tools: AI decisions are not always intelligible to humans.
    • AI is not neutral: AI-based decisions are susceptible to inaccuracies, discriminatory outcomes, embedded or inserted bias.
    • Surveillance practices for data gathering and privacy of court users.
    • New concerns for fairness and risk for Human Rights and other fundamental values.

Others

There are several ethical issues surrounding AI that have become increasingly important as the use of AI continues to expand:

Bias and Discrimination: AI systems can perpetuate or even amplify biases and discrimination based on factors like race, gender, and socio-economic status. This can lead to unfair treatment of certain groups, perpetuating existing societal inequalities.

Transparency and Explainability: AI systems often operate in a "black box," making it difficult to understand how they arrive at certain decisions. This lack of transparency can make it difficult to ensure that AI is being used ethically and fairly.

Privacy and Surveillance: AI systems often rely on collecting and analyzing vast amounts of data, raising concerns about privacy and surveillance. There are concerns that AI systems could be used to infringe on individuals' rights to privacy, freedom of speech, and freedom of association.

Accountability: AI systems can operate autonomously, which raises questions about who is accountable when something goes wrong. There may be questions about who is responsible for AI-related accidents, errors, or unintended consequences.

Human Rights: AI systems can have a significant impact on human rights, including the right to equality, the right to privacy, and the right to freedom of expression. It is essential to ensure that the use of AI does not violate these fundamental rights.

Employment Displacement: The widespread adoption of AI systems may lead to significant job losses, particularly in industries where AI can perform tasks more efficiently and cost-effectively than humans.

Safety and Security: There are concerns about the safety and security of AI systems, particularly if they are used to control critical infrastructure or weapons systems. There are also concerns that AI systems could be hacked or manipulated to cause harm.

Addressing these ethical issues is crucial to ensure that AI is used in a way that is fair, just, and respects fundamental human rights.

AI contributes to widening existing gender gaps

Only 22 % of all AI professionals are women. Because they are underrepresented in the industry, gender biases and stereotyping are being reproduced in AI technologies. It is not a coincidence that virtual personal assistants such as Siri, Alexa or Cortana are “female” by default. The servility and sometimes submissiveness they express are an example of how AI can (continue to) reinforce and spread gender bias in our societies.

AI can be a powerful tool to address climate change and environmental issues

As the planet continues to warm, climate change impacts are worsening. By gathering and analysing data, AI-powered models could, for example, help to improve ecosystem management and habitat restoration, essential to diminish the decline of fish and wildlife populations. That said, data extraction consumes nearly 10% of energy globally, so it is also essential to address the high energy consumption of AI and the consequential impact on carbon emissions.

AI cannot be a lawless zone

AI is already in our lives, directing our choices, often in ways which can be harmful. There are some legislative vacuums around the industry which need to be filled fast. The first step is to agree on exactly which values need to be enshrined, and which rules need to be enforced. Many frameworks and guidelines exist, but they are implemented unevenly, and none are truly global. AI is global, which is why we need a global instrument to regulate it.

AI during COVID-19

AI played a crucial role during the COVID-19 pandemic in several ways, including:

  • Diagnosis and Screening: AI systems were used to develop and improve COVID-19 diagnostic tools, including automated screening algorithms that can analyze CT scans and X-rays to detect COVID-19. AI systems such as deep learning algorithms were used to analyze chest X-rays and CT scans of COVID-19 patients to identify patterns and features indicative of COVID-19 pneumonia (an illustrative sketch follows this list). For instance, researchers at the University of Montreal developed an AI-based system that can detect COVID-19 in chest X-rays with 90% accuracy.
  • Drug and Vaccine Development: AI was used to accelerate the development of COVID-19 vaccines and treatments. AI was used to screen large databases of existing drugs to identify potential treatments, predict which drugs would be effective, and optimize the design of clinical trials. For example, BenevolentAI, a UK-based AI drug discovery company, used its platform to identify an existing drug called Baricitinib as a potential treatment for COVID-19. Similarly, Pfizer used AI to identify four potential COVID-19 vaccine candidates.
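
The kind of deep-learning screening described in the first bullet can be sketched minimally as follows. This is not the University of Montreal system or any real diagnostic tool: it is a toy convolutional network trained on random tensors purely so the example runs end to end.

    # A minimal, illustrative convolutional classifier for chest X-ray images.
    import torch
    import torch.nn as nn

    model = nn.Sequential(
        nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Flatten(),
        nn.Linear(16 * 16 * 16, 2),   # two classes: COVID-19 vs. other
    )

    # Stand-in batch: four grayscale 64x64 'X-rays' with random pixel values.
    images = torch.randn(4, 1, 64, 64)
    labels = torch.tensor([0, 1, 0, 1])

    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    for _ in range(5):                 # a few illustrative training steps
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()

    print(torch.softmax(model(images), dim=1))  # per-class probabilities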

Contact Tracing: AI was used to develop and improve contact tracing systems, which can help track the spread of the virus and identify potential outbreaks. AI was used to analyze data from various sources, such as mobile phone location data, to identify potential contacts and alert individuals who may have been exposed to the virus. For instance, Singapore’s TraceTogether app uses Bluetooth signals to detect nearby phones and records them as “encounters.” The app can then quickly identify potential contacts of infected individuals and notify them to take precautions.
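
The encounter-logging idea behind such apps can be sketched in a few lines; this is an assumed simplification, not TraceTogether’s actual protocol, and the device identifiers are invented.

    # A minimal sketch of Bluetooth-style encounter logging for contact tracing.
    from collections import defaultdict
    from datetime import datetime

    # device id -> set of (other device id, time of encounter) pairs
    encounters = defaultdict(set)

    def record_encounter(device_a: str, device_b: str, when: datetime) -> None:
        """Store that two devices were within Bluetooth range of each other."""
        encounters[device_a].add((device_b, when))
        encounters[device_b].add((device_a, when))

    record_encounter("phone-1", "phone-2", datetime(2020, 4, 1, 9, 30))
    record_encounter("phone-2", "phone-3", datetime(2020, 4, 1, 12, 0))

    def contacts_to_notify(infected_device: str) -> set:
        """Return devices that should be alerted after a positive test."""
        return {other for other, _ in encounters[infected_device]}

    print(contacts_to_notify("phone-2"))  # {'phone-1', 'phone-3'}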

Resource Allocation: AI was used to help hospitals and healthcare systems allocate resources more effectively during the pandemic. AI was used to predict patient demand, optimize bed and staff assignments, and prioritize the allocation of personal protective equipment (PPE). For example, researchers at the University of Pennsylvania developed an AI-powered tool that can predict how many patients with COVID-19 will need hospitalization and intensive care, helping hospitals to allocate resources more efficiently.

Monitoring and Surveillance: AI was used to monitor the spread of the virus and predict future outbreaks. AI was used to analyze social media data, news reports, and other sources of information to identify potential outbreaks and predict the spread of the virus. For example, BlueDot, a Canadian company that uses AI to track infectious disease outbreaks, alerted its clients to the COVID-19 outbreak in Wuhan, China, nine days before the World Health Organization issued a warning.
