Most alarming AI ethical issues: How to develop ethical AI?

1 August 2024

Artificial intelligence and machine learning are catalysts of progress with the potential to improve society and bear fruit in virtually any field where they are applied. However, their advance has brought numerous AI ethical issues along with it. In this article, we will delve into the most alarming issues AI faces today and find out how to develop ethical AI properly.

How did AI technology ethical issues arise?

AI-based process automation has indeed found application in many areas. Businesses use smart marketing analytics to forecast consumer behavior, GenAI to improve client retention rates, computer vision to detect suspicious behavior, and so on. In short, businesses now have the tools to generate almost any media content and automate almost any task.

Two capabilities of AI systems are critical here:

  • They can make decisions on their own, easing the workload for employees
  • They can analyze data in volumes and at a rate no human can match (which is also why a human cannot verify the correctness of every decision).

For these reasons, people have delegated most tasks and much of their analysis and decision-making to AI systems without realizing it would raise acute ethical issues of using AI. Accordingly, the main problem is to determine whether the decisions made by AI algorithms are ethical and how far we can trust them.

AI ethical issues examples

The introduction of artificial intelligence marks a significant paradigm shift in how industries function today. Sometimes, the shift is so sharp that unprepared businesses face ethical issues with AI performance.

For instance, in 2020 Microsoft laid off dozens of news department employees and replaced them with artificial intelligence. Unfortunately, the company did not take into account the algorithm’s bias and the fact that it was often unable to tell apart people of color.

Shortly after, the news selection algorithm published an article about Jade Thirlwall of Little Mix and her reflections on racism. The problem was that the algorithm posted a photograph of her bandmate Leigh-Anne Pinnock instead of Thirlwall’s.

Another notable example of AI bias is the COMPAS algorithm, used in US court systems to assess the likelihood of reoffending. The model produced false positives for recidivism among black offenders (45%) at nearly twice the rate it did among white offenders (23%).
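
To make the metric behind these findings concrete, here is a minimal sketch, with invented numbers rather than the actual COMPAS data, of how group-wise false positive rates can be computed from binary risk labels:

```python
# Hypothetical illustration: measuring group-wise false positive rates,
# the metric at the heart of the COMPAS findings. All data is invented.
import numpy as np

def false_positive_rate(y_true, y_pred):
    """Share of actual negatives (no reoffense) flagged as high risk."""
    negatives = (y_true == 0)
    return np.mean(y_pred[negatives] == 1)

# y_true: 1 = reoffended, 0 = did not; y_pred: 1 = flagged as high risk
y_true = np.array([0, 0, 1, 0, 1, 0, 0, 1])
y_pred = np.array([1, 0, 1, 1, 0, 0, 1, 1])
group  = np.array(["a", "b", "a", "a", "b", "b", "a", "b"])

for g in np.unique(group):
    mask = (group == g)
    print(g, false_positive_rate(y_true[mask], y_pred[mask]))
```

A model can look accurate overall while its false positives concentrate in one group, which is exactly the disparity the COMPAS analysis surfaced.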

Ethical issues with AI in healthcare

In medicine, a number of services have already been developed that make it possible to predict how a disease will develop and to suggest treatment. AI adoption streamlines doctors’ work and reduces both the chance of error and the time needed to obtain results, which can save more lives.

However, there are a number of ethical questions about the price of an error, responsibility, the confidentiality of personal medical data, and the correct interpretation of results. Diagnostic errors in oncology are still possible, as are diseases detected at late stages and deaths of oncology patients within the first year after diagnosis.

To overcome the problem, healthcare centers follow certain principles:

  • All studies with a suspected pathology are reviewed by a specialist
  • The model’s performance is monitored continuously
  • Patient data is automatically de-identified (see the sketch below).
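
As an illustration of the last principle, here is a minimal sketch of automatic de-identification, assuming simple dictionary-shaped patient records; the field names are hypothetical:

```python
# A minimal sketch of automatic de-identification of patient records.
# The record layout and field names here are hypothetical.
import hashlib

DIRECT_IDENTIFIERS = {"name", "address", "phone"}

def deidentify(record: dict) -> dict:
    """Drop direct identifiers and replace the patient ID with a hash."""
    clean = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
    clean["patient_id"] = hashlib.sha256(
        str(record["patient_id"]).encode()
    ).hexdigest()[:12]  # stable pseudonym instead of the raw ID
    return clean

record = {"patient_id": 1042, "name": "Jane Doe", "phone": "555-0199",
          "diagnosis": "C50.9", "age": 54}
print(deidentify(record))  # keeps diagnosis and age, strips identifiers
```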

In the US, a program that predicted how much medical care patients needed revealed a bias against African Americans: it considered a black patient less in need even when there were more objective reasons for them to receive care. The flaw was that the algorithm based its recommendations on patients’ past medical expenses.

However, a person’s healthcare spending depends on income and social status. As a result, the algorithm concluded that patients who had received less healthcare in the past, often because of low income, needed less of it now.
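
A toy example, with all numbers invented, makes the proxy problem visible:

```python
# Toy numbers (invented) showing why past spending is a poor proxy for
# medical need: two patients with equal need but unequal access to care.
true_need = {"patient_a": 8, "patient_b": 8}        # same severity score
past_cost = {"patient_a": 9000, "patient_b": 3000}  # access differs

# A model trained to predict future cost as a stand-in for "need" ranks
# patient_b lower, even though the underlying need is identical.
by_cost = sorted(past_cost, key=past_cost.get, reverse=True)
by_need = sorted(true_need, key=true_need.get, reverse=True)
print("ranked by cost proxy:", by_cost)  # patient_b looks 'less in need'
print("ranked by true need: ", by_need)  # a tie: both equally in need
```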

AI ethical issues in education

AI can provide diverse training options tailored to the individual characteristics, inclinations, and interests of a particular student. For example, it can help students learn at their own pace and in their own style by providing customizable content, feedback, and reference materials. It can also assess student performance using data-driven adaptive testing techniques, and it helps teachers and mentors provide individual, interactive support and feedback to students using natural language processing and speech recognition.

Among the main concerns that students now have are the evaluation of learning success, the bias toward particular students, and the disappearance of teaching and mentoring professions. To resolve the existing ethical issues in AI technology, institutions must establish clear policies on how AI-driven decisions are made and communicated.

They have to ensure that AI is not an enigmatic, unaccountable force in education but a comprehensible and scrutinizable tool. The European AI Alliance is also working on the problem, bringing together stakeholders from academia, industry, and civil society to develop guidelines for ethical AI development to protect accessibility and fairness in AI-driven education.

What types of ethical issues of AI in business exist?

Artificial intelligence has immense potential, but it is held back by a low level of trust in algorithms and by the lack of a clear ethical framework for AI applications. Let us work through the ethical issues surrounding AI.

Bias and prejudice

Artificial intelligence learns from the information it is fed by developers and ordinary users, so an AI is only as impartial as its data. This can eventually lead to violations of basic ethical norms concerning morality, racism, ageism, and more.

This problem is especially common when answers are given on the basis of a single dataset the model was trained on. In addition, AI is created by humans, and people are biased, so if the data reflects human bias, the AI will give a biased result. For example, when Amazon used AI to screen job applications, it quickly discovered that the algorithm penalized resumes submitted by women.
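
One common screening check for this kind of outcome bias is comparing selection rates across groups. Below is a minimal sketch on invented data, using the "four-fifths rule" heuristic from US employment practice as the threshold:

```python
# A minimal disparate-impact check on hiring outcomes, in the spirit of
# the Amazon example. All data below is invented.
import numpy as np

selected = np.array([1, 0, 1, 1, 0, 0, 0, 1, 1, 0])  # 1 = shortlisted
gender   = np.array(["m", "m", "m", "m", "f", "f", "f", "m", "m", "f"])

rates = {g: selected[gender == g].mean() for g in np.unique(gender)}
print(rates)

# The four-fifths rule flags a problem when one group's selection rate
# falls below 80% of the highest group's rate.
ratio = min(rates.values()) / max(rates.values())
print("disparate impact ratio:", ratio, "-> flag" if ratio < 0.8 else "-> ok")
```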

Privacy & confidentiality violations

Generative AI development solutions such as ChatGPT, DALL-E 2, Stable Diffusion, and Midjourney rely on enormous amounts of data. Unless artificially generated, this data is drawn from real use cases, people, and companies, and it may contain personal details, trade secrets, or other confidential information.

One of the main problems is the possibility of malicious use of machine learning algorithms for breaking into systems and stealing confidential information. Just ask an AI to imagine that there is no personal data protection law – what will stop it from disclosing personal information? Likewise, attacks on machine learning models, known as adversarial attacks, can distort an algorithm’s performance, creating risks to data security and privacy.
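
To make the adversarial-attack idea concrete, here is a hedged sketch of the fast-gradient-sign trick applied to a toy linear classifier; the weights and input are invented, not taken from any real system:

```python
# A minimal sketch of an adversarial (evasion) attack on a linear model
# using the fast-gradient-sign idea. Weights and inputs are invented.
import numpy as np

w = np.array([1.5, -2.0, 0.5])   # weights of a toy logistic model
b = -0.2

def predict(x):
    """Probability of class 1 under the toy model."""
    return 1 / (1 + np.exp(-(w @ x + b)))

x = np.array([0.4, -0.3, 0.9])
print("clean input:", predict(x))      # ~0.81, confidently class 1

# For a linear model the input gradient of the score is just w, so a
# small step against sign(w) pushes the prediction toward class 0.
eps = 0.5
x_adv = x - eps * np.sign(w)
print("perturbed:  ", predict(x_adv))  # ~0.37, the decision flips
```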

To protect users, the Office of the Privacy Commissioner of Canada has announced an investigation into the developers of the ChatGPT neural network. The process was initiated in response to a complaint about the collection, use, and disclosure of personal information without user consent.

A similar story unfolded in Italy, where the data protection authority decided to temporarily block access to ChatGPT in the country while it investigated possible law violations by OpenAI. The regulator considered it unlawful for an AI to learn from personal information without a legal basis.

Security concerns

In the hands of unscrupulous individuals, AI instruments can be turned into weapons. They can be used to create fake data capable of bypassing cloud computing security systems, and the resulting output can trigger attacks on a system, manipulate captured data, and generally cause damage.

Experts say that attackers can use generative AI and large language models to scale attacks to an unprecedented level of speed and complexity, for instance:

  • It can write malicious code to create automated malware, steal data, infect networks, and attack systems with little to no human intervention.
  • Attackers can also steal AI models and manipulate them to their benefit, or poison and modify data to produce malicious outcomes.
  • Private companies may use AI to monitor and follow people without their knowledge or consent, which violates their right to privacy.
  • AI can be a tool for creating complex phishing scams and various forgeries that can be used for fraud.
  • People create targeted campaigns of misinformation, post false facts, and manipulate public opinion with the help of AI.

It is worth noting that companies attacked by hackers wielding AI will be forced to strengthen their security systems. Here, the most effective way to detect such attacks is another tool based on machine learning; a minimal sketch of this idea follows below.
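
As a sketch of that idea, an off-the-shelf anomaly detector such as scikit-learn’s IsolationForest can be trained on normal activity and then used to flag unusual patterns; the "traffic features" below are invented:

```python
# A minimal sketch of ML-based attack detection with an anomaly detector.
# The numeric "traffic features" are invented for illustration.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
normal = rng.normal(loc=0.0, scale=1.0, size=(200, 3))  # usual traffic
attack = rng.normal(loc=6.0, scale=0.5, size=(5, 3))    # anomalous burst

detector = IsolationForest(contamination=0.05, random_state=0)
detector.fit(normal)

labels = detector.predict(np.vstack([normal[:5], attack]))
print(labels)  # 1 = looks normal, -1 = flagged as a possible attack
```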

Responsibility

The issue of responsibility for AI decision-making remains a complex one. In case of errors or negative consequences, it is difficult to determine who is responsible: the developers, the system owners, or the technology itself.

And the more AI develops, the more acute the problem becomes. Moreover, the development of autonomous tools capable of making life-and-death decisions without human intervention raises ethical questions about who bears responsibility for those decisions.

Today, different approaches to AI responsibility are discussed, including:

  • Complete release of anyone from responsibility for the actions of AI (similar to force majeure);
  • Partial exemption from liability (releasing a specific person from responsibility while paying compensation to the victims);
  • Fault-based liability, which arises only where a particular person is at fault, e.g., the producer, the developer, the person responsible for training the AI, the owner, or the user;
  • Strict (no-fault) liability, where a certain person – most likely the producer – is by default held responsible for the AI system;
  • Personal responsibility of robots, provided that robots are granted legal personality (rights and duties, an "electronic person" status).

Transparency

Systems capable of self-learning and evolution are becoming more and more complex. One of the most significant issues is how a particular AI system reaches its decisions, since they emerge from enormous and complicated algorithms.

The lack of understanding of how artificial intelligence achieves its results is one of the reasons for the low level of trust in modern technologies, and it may hinder innovation in AI software development. When a transparent AI system malfunctions, developers can quickly find the cause of the error; in an opaque system, identifying the error takes far more time and effort.

The actions of AI should be transparent to a wide range of parties for several reasons:

  • Transparency is important for users because it builds trust in the system, providing an easy way to understand what the system does and why
  • Validation and certification require transparency to disclose the system’s internal processes and show whether it complies with the legislation in force
  • Lawyers and other experts need transparency when investigating an accident, so that the internal process that led to it can be traced (see the audit-log sketch below).
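
One simple building block for such traceability is a decision audit log that records what the system saw and why it decided as it did. The sketch below uses a hypothetical schema, not any industry standard:

```python
# A minimal sketch of a decision audit log for traceability.
# The log schema and example values are hypothetical.
import json, time

def log_decision(model_version, inputs, output, reasons, path="audit.log"):
    """Append one decision record so it can be traced later."""
    entry = {
        "ts": time.time(),       # when the decision was made
        "model": model_version,  # which model version produced it
        "inputs": inputs,        # what the model saw
        "output": output,        # what it decided
        "reasons": reasons,      # top factors behind the decision
    }
    with open(path, "a") as f:
        f.write(json.dumps(entry) + "\n")

log_decision("credit-scorer-1.3", {"income": 42000, "age": 31},
             "approve", ["income above threshold", "no past defaults"])
```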

Reliability

The reliability of AI deserves consideration in several respects. The first aspect is purely technical and concerns the reliability and safety of the software’s technical foundation. AI certainly minimizes human error, but AI systems also carry a risk of errors and technical failures.

The second aspect concerns the very essence of what AI is used for. Since it deals with loosely formalized tasks and plausible rather than strict reasoning, we do not always expect AI’s solutions to be optimal or the only correct answer. Instead, we treat its decisions as reasonable, suitable, or appropriate. Because their correctness is hard to prove, improving the reliability of AI algorithms is vital to increasing trust in them.

Develop reliable AI solutions with InData Labs
Develop reliable and ethical solutions that would empower your business and increase profits.
Book a call

Job displacement

McKinsey Global Institute examined how the rise of AI and its ethical issues could impact US employment in the years ahead and found that nearly 12 million Americans in occupations with shrinking demand may need to switch jobs by 2030. Moreover, at least 14% of employees globally could need to change careers by 2030 due to digitization, robotics, and advances in AI.

Researchers from Nexford University report that the jobs most likely to be automated include customer service reps, receptionists, accountants, salespeople, and retail and warehouse workers. Nevertheless, the picture is not all bad: the rapid advance of AI pushes more and more people to learn to develop, maintain, and work with such systems, which creates many new vacancies in the job market.

How to develop ethical AI?

As mentioned above, artificial intelligence presents several issues today, yet each of them can be mitigated at different levels of control. Let us look at what is being done today to minimize AI’s downsides and what businesses can do to develop an AI model that aligns with all standards and requirements.

Double control

It is sensible to introduce standardization when working on your AI strategy. Standards set a minimum level below which a system cannot be considered reliable. At the same time, this bar should not exceed the capacity of today’s technologies, so that developers are not constrained in creating breakthrough solutions and part of the technology is not pushed out of the legal market.

It is also important that specialists in high-stakes fields like medicine and education maintain a consistently high level of qualification. You cannot reduce the training of a doctor or an operator of hazardous equipment to knowing how to work with the system and which buttons to press. On the contrary, each of them must have a detailed understanding of their field. No matter how clever the system may be, a second, human level of control is needed, and the relevant professionals must be trained properly.

Enhanced datasets

Now let’s dwell on counterfactual fairness methods for fighting prejudice. To form an unbiased judgment about a person, an AI model constructs a hypothetical situation in which that person has the opposite characteristics: a woman becomes a man, a poor African becomes a white American, and so on. The person’s real status then cannot affect the assessment of their actions, since the judgment is formed in the hypothetical situation. Such a judgment is considered free from bias and, therefore, fair.
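
A minimal sketch of the core check: flip the protected attribute, hold everything else fixed, and measure how much the model’s output moves. The scoring function and feature names here are hypothetical stand-ins:

```python
# A minimal counterfactual fairness check: does the score change when
# only the protected attribute changes? Model and features are invented.
def score(applicant: dict) -> float:
    """Stand-in for a trained model's scoring function."""
    return 0.5 + 0.01 * (applicant["experience_years"] - 5)

def counterfactual_gap(model, applicant, attr="gender", values=("f", "m")):
    """Score difference between the two counterfactual worlds."""
    scores = []
    for v in values:
        twin = dict(applicant, **{attr: v})  # identical except for attr
        scores.append(model(twin))
    return abs(scores[0] - scores[1])

applicant = {"gender": "f", "experience_years": 8}
print(counterfactual_gap(score, applicant))  # 0.0 -> no direct dependence
```

Note that this sketch only tests direct dependence on the attribute; full counterfactual fairness also requires a causal model of how the attribute influences the other features.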

Decision explanations

Explainability is an integral component of AI TRiSM strategies, which many companies adopt today to build trustworthy and transparent AI. The explanatory component of an AI system should show how it reached a decision without revealing the whole mechanics of its functioning.

Otherwise, the credibility and value of artificial intelligence become questionable. For instance, an expert system should be able to show the user its whole chain of reasoning, and a data mining system should present the hypotheses it forms in an explicit, human-friendly form; a small sketch of such an inherently explainable model follows below.
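
As a small illustration of a model whose chain of reasoning can be shown in full, here is a sketch using a shallow decision tree on invented data:

```python
# A minimal sketch of an inherently explainable model: a shallow decision
# tree whose full reasoning can be printed. Data and features are invented.
from sklearn.tree import DecisionTreeClassifier, export_text

X = [[25, 0], [40, 1], [35, 1], [22, 0], [50, 1], [30, 0]]
y = [0, 1, 1, 0, 1, 0]  # e.g., loan approved (1) or not (0)

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(export_text(tree, feature_names=["age", "has_collateral"]))
# The printout is the decision chain itself: every rule the model applies.
```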

Global standards legislation

Countries should cooperate to work out laws to control the development and capabilities of artificial intelligence. UNESCO, the International Organization for Standardization, the African Union, and the Council of Europe are all working on multilateral AI governance frameworks.

To support the initiative, UN Secretary-General Antonio Guterres announced the creation of a 39-member advisory body in October 2023. It aims to harness AI for the common good and formulate global standards for regulating it. The body’s five stated goals are the following:

  • Help countries maximize the benefits of AI;
  • Address existing and future threats and risks;
  • Create and implement international monitoring and control algorithms;
  • Collect AI expertise and make it available to the global community;
  • Create more suitable AI for sustainable development.

As for AI’s responsibility, the European Parliament holds that human beings should remain responsible for robots. In 2017, it adopted a resolution on civil law rules for robotics, which proposed a model for holding responsible the person who could have minimized the risks and consequences but failed to do so.

However, in the future, complex autonomous robots could obtain the status of electronic persons and take responsibility for any harm they cause. They may also acquire rights and duties if they enter into legal relationships with others.

Summing up

The benefits of using artificial intelligence technologies are undeniable; their application has indeed improved the performance of many companies and brought them to a new level of customer experience, data security, and operational efficiency.

However, legal and ethical regulation of AI tools and their further advances is still taking shape. Addressing these challenges requires coordinated effort from governments and private companies alike.

Nevertheless, quality AI consulting and planning remain the surest way to build intelligent, trustworthy, secure, and fair systems that will bring about positive change worldwide and drive society forward.

FAQ

  • How do you develop ethical AI? Establish principles that guide the creation of AI systems with fairness, transparency, and respect for privacy. This involves considering the potential impacts on society and individuals and actively working to mitigate any negative effects. Today, many businesses adopt AI TRiSM strategies aimed at building an ethical, secure, and trustworthy system.

  • How can ethical issues in AI be resolved? The solution consists of several steps. First, implement a comprehensive ethical framework that the system will have to follow. Second, make sure the training data is complete, clean, and free of biases. Finally, monitor the system continuously and be prepared to make the necessary adjustments to the algorithm.

  • What ethical concerns does today’s AI raise? It raises concerns about bias and discrimination (based on a person’s sex, race, age, etc.), privacy violations, and the potential for misuse, which can in turn affect employment and legal decisions. Additionally, there are worries about the accountability of AI decisions and the need for transparency in AI systems.

  • How do researchers keep AI development ethical? Researchers study the factors that can lead to unfair system behavior and conduct thorough ethical impact assessments. They collaborate with ethicists and other interested parties to make sure their AI adheres to established ethical guidelines, and they focus on developing explainable AI and minimizing biases in training data and algorithms.

  • How can unethical use of AI be prevented? It is essential to set clear ethical boundaries that AI applications have to adhere to and to enforce regulations that hold developers and users accountable. The training data should also be clean, diverse, and free of prejudice to exclude the possibility of unethical behavior.

AI consulting & development services
Reach out for a consultation on ethical AI development and discuss possible solutions to your business challenges.
Let's talk
