Navigating the Rise of AI: Ensuring Ethical AI in Research

Artificial intelligence (AI) is a technology that enables machines and computers to simulate human intelligence and solve a variety of problems.1

The rapid increase in AI use across many industries has also elevated ethical concerns. The term “ethical AI” refers to creating and implementing AI systems that are trustworthy, transparent, fair, and aligned with human values.2

Image Credit: Stock-Asso/Shutterstock.com

The Rapid Increase in AI Use

As stated above, AI is a technology that allows computers to perform cognitive functions much like the human mind, including learning, problem-solving, reasoning, and interacting with the environment.3

AI encompasses machine learning (ML) and deep learning (DL), which enable the creation of models that can learn from available data.4 The more thoroughly these models are trained, the more accurate their predictions and classifications become.

AI is often combined with other technologies, such as robotics, sensors, and geolocation, to perform tasks that would otherwise require human intervention. It is applied to both simple and complex real-world problems.

For example, several AI applications, such as GPS guidance, autonomous vehicles, and generative AI tools (e.g., OpenAI’s ChatGPT), are used in our daily lives.5

The Need for Ethical Considerations of AI

The increase in AI applications has triggered many debates and conversations about ethical AI and responsible AI. Technologists, ethicists, and policymakers often discuss whether AI can surpass human capabilities in the future.

Furthermore, AI has threatened individuals’ privacy through unauthorized access to sensitive information and data breaches.6 Some important ethical concerns linked to AI applications are discussed below:

Transparency

The majority of AI systems operate as a black box: the system is fed data (input) and produces decisions (output) without revealing anything about its internal workings.7

Understanding how a certain decision is made is important, particularly for AI applications in healthcare and autonomous vehicles. Transparency in AI systems could prevent errors and ensure the implementation of appropriate corrective actions.

Researchers are currently focusing on developing explainable AI to overcome these black-box challenges. Explainable AI would help reduce potential bias and provide more accurate predictions.

Bias and Discrimination

AI models are trained on massive datasets, which may contain societal biases.8 Therefore, AI algorithms trained with these biases would generate unfair or discriminatory outcomes in multiple domains, including hiring, resource allocation, and criminal justice.

For example, one could consider a company utilizing an AI system to screen resumes of job applicants. If such a system were trained on historical data of successful hires within the company, this could perpetuate pre-existing gender or racial biases in subsequent hiring decisions.
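To make this concrete, the following is a minimal synthetic sketch, assuming NumPy and scikit-learn are available; the group labels, "skill" feature, coefficients, and hiring threshold are invented purely for illustration and are not drawn from any real hiring system:

```python
# Sketch: a screening model trained on historically biased hiring labels
# reproduces that bias at prediction time (all data here is synthetic).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
group = rng.integers(0, 2, n)          # 0 = group A, 1 = group B (hypothetical)
skill = rng.normal(0, 1, n)            # true, group-independent qualification
# Historical hires favored group A regardless of skill (the encoded bias).
hired = (skill + 0.8 * (group == 0) + rng.normal(0, 0.5, n)) > 0.5

X = np.column_stack([skill, group])    # group membership leaks into the features
model = LogisticRegression().fit(X, hired)

pred = model.predict(X)
for g in (0, 1):
    print(f"predicted hire rate, group {g}: {pred[group == g].mean():.2f}")
# Group A is recommended at a much higher rate even though skill is
# distributed identically in both groups.
```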

Ownership

AI advances faster than regulators can keep pace. In the context of AI-generated art, for example, it has been difficult to clarify ownership rights and protect against potential infringement.

Job Displacement

AI automation has the potential to replace human jobs, which could increase unemployment and widen economic inequalities.9

Social Manipulation

Fake news and misinformation have become common in politics and competitive business. AI algorithms can spread misinformation and manipulate public opinion.

The generation of realistic yet fabricated audiovisual content using deepfake technology can significantly endanger political stability.10


Strategies to Ensure Ethical AI

Ethical AI focuses on implementing AI systems that are accountable, transparent, and aligned with human rights and values. Transparency entails the ability of AI systems to explain their decision-making processes.

Several strategies, such as model interpretation, have been employed to improve transparency in AI. Model interpretation involves visualizing the internal workings of AI systems to better understand the logic behind a decision.11

Counterfactual analysis is another strategy for improving AI transparency. This analysis involves testing hypothetical scenarios to understand how the AI system responds.

A key advantage of counterfactual analysis is that it enables humans to comprehend how an AI system arrived at a specific decision and detect and rectify errors or biases.12
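As a rough illustration of the idea, assuming a generic scikit-learn classifier (the dataset, model, feature index, and perturbation size below are arbitrary, and this is not a formal counterfactual-explanation method), one can perturb a single input and watch how the prediction shifts:

```python
# Sketch: a simple counterfactual probe - change one feature of a single case
# and observe whether the model's predicted probability moves or flips.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=500, n_features=5, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

x = X[0].copy()                              # the individual case to explain
baseline = model.predict_proba([x])[0, 1]

counterfactual = x.copy()
counterfactual[2] += 1.5                     # "what if feature 2 were higher?"
altered = model.predict_proba([counterfactual])[0, 1]

print(f"P(positive) original:       {baseline:.2f}")
print(f"P(positive) counterfactual: {altered:.2f}")
```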

To improve AI transparency, Local Interpretable Model-Agnostic Explanations (LIME) and Shapley Additive Explanations (SHAP) can also be employed to explain the output of any machine learning model.13
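A minimal SHAP sketch, assuming the shap package is installed (the dataset, model, and version-handling details below are illustrative rather than a recommended pipeline), could rank features by their average contribution to a classifier's predictions:

```python
# Sketch: attribute a tree model's predictions to input features with SHAP.
import numpy as np
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

data = load_breast_cancer()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(data.data[:100])

# Depending on the shap version, classifiers return one array per class
# or a single (samples, features, classes) array; pick the positive class.
values = shap_values[1] if isinstance(shap_values, list) else shap_values
if values.ndim == 3:
    values = values[..., 1]

# Rank features by mean absolute contribution across the explained samples.
importance = np.abs(values).mean(axis=0)
for name, score in sorted(zip(data.feature_names, importance),
                          key=lambda t: -t[1])[:5]:
    print(f"{name}: {score:.4f}")
```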

Biases in the data used to train algorithms can lead to unfair decisions or discrimination. Therefore, it is imperative to choose authentic data sources.

Strategies such as data augmentation, in which data are added or altered to create a more varied dataset, can help. AI researchers and engineers must also constantly assess algorithms to identify and correct biases that may arise over time.14
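One simple tactic in this spirit is rebalancing: oversampling an under-represented group so the training data is not dominated by a single demographic. A minimal pandas sketch, with hypothetical column names and counts, might look like this:

```python
# Sketch: oversample the smaller group so both groups appear equally often.
import pandas as pd

df = pd.DataFrame({
    "feature": range(10),
    "group":   ["A"] * 8 + ["B"] * 2,   # hypothetical 80/20 imbalance
})

target = df["group"].value_counts().max()
balanced = pd.concat(
    [g.sample(target, replace=True, random_state=0)
     for _, g in df.groupby("group")],
    ignore_index=True,
)
print(balanced["group"].value_counts())   # both groups now have 8 rows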

Since data fuels AI systems, it is important to ensure that it is collected and used in accordance with all legal and ethical obligations.

Furthermore, individuals must be entitled to data privacy throughout AI development and deployment. Incorporating moral guidelines and values while training algorithms could help create ethical AI systems.15

A diverse group of stakeholders must be considered in the design of AI models. To prevent biased or unethical AI, it is important to include datasets that represent people of diverse ethnicities, genders, educational backgrounds, socioeconomic statuses, values, and beliefs.

Evaluating an algorithm’s fairness and data security, and auditing its outputs for potential biases, also helps minimize AI bias. Companies must use unbiased data to avoid perpetuating societal biases in their AI models.
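Such an evaluation can start with something as simple as comparing positive-prediction rates across groups (demographic parity). The sketch below uses hypothetical predictions and group labels, and the flagging threshold mentioned in the comment is purely illustrative:

```python
# Sketch: a basic fairness audit comparing positive-prediction rates
# across groups (demographic parity gap).
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Largest difference in positive-prediction rate between any two groups."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return max(rates) - min(rates)

# Hypothetical model outputs and group membership labels.
y_pred = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
group  = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

gap = demographic_parity_gap(y_pred, group)
print(f"demographic parity gap: {gap:.2f}")   # e.g., flag for review if large
```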

Developing standards, norms, and guidelines for creating and using AI systems is important. These rules and regulations must address accountability, data gathering and storage, and decision-making algorithms, thereby ensuring the ethical and responsible use of AI.

For instance, the proposed Algorithmic Accountability Act of 2022 would require companies in the United States to assess the impact of their AI systems on multiple factors, including discrimination, bias, and privacy, and to take necessary steps to mitigate any negative effects.16

The European Commission’s High-Level Expert Group on AI has developed Ethics Guidelines for Trustworthy Artificial Intelligence, which offer a framework for the development of responsible and ethical AI systems.

References

  1. Collins C, et al. Artificial intelligence in information systems research: A systematic literature review and research agenda. International Journal of Information Management. 2021; 60, 102383. https://doi.org/10.1016/j.ijinfomgt.2021.102383
  2. Díaz-Rodríguez N, et al. Connecting the dots in trustworthy Artificial Intelligence: From AI principles, ethics, and key requirements to responsible AI systems and regulation. Information Fusion. 2023; 99, 101896. https://doi.org/10.1016/j.inffus.2023.101896
  3. Bajwa J, et al. Artificial intelligence in healthcare: transforming the practice of medicine. Future Healthc J. 2021;8(2):e188-e194. doi:10.7861/fhj.2021-0095
  4. Sarker IH. AI-Based Modeling: Techniques, Applications and Research Issues Towards Automation, Intelligent and Smart Systems. SN Comput Sci. 2022;3(2):158. doi:10.1007/s42979-022-01043-x
  5. Soori M, et al. Artificial intelligence, machine learning and deep learning in advanced robotics, a review. Cognitive Robotics. 2023; 3, 54-70. https://doi.org/10.1016/j.cogr.2023.04.001
  6. Rawas S. AI: the future of humanity. Discov Artif Intell. 2024; 4, 25. https://doi.org/10.1007/s44163-024-00118-3
  7. Hassija V, et al. Interpreting Black-Box Models: A Review on Explainable Artificial Intelligence. Cogn Comput. 2024; 16, 45–74. https://doi.org/10.1007/s12559-023-10179-8
  8. Ferrara E. Fairness and Bias in Artificial Intelligence: A Brief Survey of Sources, Impacts, and Mitigation Strategies. Sci. 2024; 6(1):3. https://doi.org/10.3390/sci6010003
  9. Wang X, et al. How artificial intelligence affects the labour force employment structure from the perspective of industrial structure optimisation. Heliyon. 2024;10(5):e26686. doi:10.1016/j.heliyon.2024.e26686
  10. Raman R, Kumar Nair V, Nedungadi P, et al. Fake news research trends, linkages to generative artificial intelligence and sustainable development goals. Heliyon. 2024;10(3):e24727. doi:10.1016/j.heliyon.2024.e24727
  11. Yu L, Li Y. Artificial Intelligence Decision-Making Transparency and Employees' Trust: The Parallel Multiple Mediating Effect of Effectiveness and Discomfort. Behav Sci (Basel). 2022;12(5):127. doi:10.3390/bs12050127
  12. Tešić M, Hahn U. Can counterfactual explanations of AI systems' predictions skew lay users' causal intuitions about the world? If so, can we correct for that?. Patterns (N Y). 2022;3(12):100635. doi:10.1016/j.patter.2022.100635
  13. Sathyan A, et al. Interpretable AI for bio-medical applications. Complex Eng Syst. 2022;2(4):18. doi:10.20517/ces.2022.41
  14. Timmons AC, et al. A Call to Action on Assessing and Mitigating Bias in Artificial Intelligence Applications for Mental Health. Perspect Psychol Sci. 2023;18(5):1062-1096. doi:10.1177/17456916221134490
  15. Rodgers W, et al. An artificial intelligence algorithmic approach to ethical decision-making in human resource management processes. Human Resource Management Review. 2024; 33(1), 100925. https://doi.org/10.1016/j.hrmr.2022.100925
  16. Mökander J., et al. The US Algorithmic Accountability Act of 2022 vs. The EU Artificial Intelligence Act: what can they learn from each other?. Minds & Machines. 2022; 32, 751–758. https://doi.org/10.1007/s11023-022-09612-y

Last Updated: Jun 27, 2024

Written by Dr. Priyom Bose

Priyom holds a Ph.D. in Plant Biology and Biotechnology from the University of Madras, India. She is an active researcher and an experienced science writer. Priyom has also co-authored several original research articles that have been published in reputed peer-reviewed journals. She is also an avid reader and an amateur photographer.
