
The Ethical Framework for AI Development: Beyond Bias and Discrimination

  • Writer: Petko Getov
  • Jan 3
  • 6 min read

In recent years, the discussion in the AI ethics field has been dominated by bias and discrimination - understandably so. As experts have shown in numerous ways, many AI systems exhibit problematic patterns stemming from issues of representation and from the fact that the datasets used to train them contain inherent biases against specific groups of people. Furthermore, it is often unclear how exactly an AI system categorizes groups of people, especially in unsupervised learning settings.

 

While addressing bias and discrimination remains crucial to ethical AI development, I think we need to look deeper at the actual challenges that companies and institutions face in creating and maintaining ethical AI systems.

 

I would like to make clear that all of these challenges are interrelated, including with the question of bias and discrimination. The principles I list below are drawn from documents of international organizations as well as from scholars, and if companies do not follow these principles, the end result may very well be a biased or discriminatory system.

 

Core Principles for Ethical AI Development:



 

  1. Human-Centricity: 



    AI systems must be fundamentally "human-centric," which requires three key elements: (1) keeping a human in the loop who makes the final decisions, (2) prioritizing the system's impact on the end user, and (3) generating value not only for the producer of the system but also for the actual humans who use it.



    Having a real (in legal terms, a "natural") person who ultimately makes decisions based on an AI system's input is crucial from many perspectives: obvious common-sense errors are often detectable only by a human; responsibility for decisions should, as much as possible, stay with humans; and human oversight enhances the experience of end users and ensures that the outputs generated by an AI system are as realistic and usable as possible.



    Furthermore, until AI systems are truly capable of human-like understanding and reasoning, we need people to assess the social, economic, political, legal, and many other types of impact of each AI system. Since most AI systems currently available, especially generative ones, work on the basis of mathematical probabilities, they are incapable of empathy, common sense, and big-picture contextual thinking. Humans, though imperfect, can estimate impacts from psychological and emotional perspectives and should do so, as only in this way will AI systems become fully beneficial.



    I strongly believe that combining human and machine input would make many decisions much better than leaving them solely to the human or the machine. AI systems can produce recommendations that are rationally far better than what a human can manage, but at the same time, empathy and common sense, as already explained, are what humans do best. Combining a highly logical, rational output from a machine with the human ability to see beyond it and consider broader consequences will allow for decisions that are not only ethical but also much better for society.
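
    To make this concrete, here is a minimal sketch in Python of how a human-in-the-loop gate might work; all names and thresholds are hypothetical assumptions of mine, not any particular system's design. Only low-risk, high-confidence recommendations are applied automatically; everything else is routed to a named human reviewer who makes the final call.

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    """Output of a (hypothetical) AI model: a proposed action plus confidence."""
    action: str
    confidence: float  # model's self-reported confidence, 0.0-1.0

@dataclass
class Decision:
    action: str
    decided_by: str  # "model" only for low-risk cases; otherwise a human

def ask_human(reviewer: str, rec: Recommendation) -> bool:
    """Stub: a real system would block here on an actual human review."""
    print(f"{reviewer}, please review: {rec.action} ({rec.confidence:.0%})")
    return True  # assume approval for the sake of the example

def decide(rec: Recommendation, reviewer: str,
           high_risk: bool, auto_threshold: float = 0.95) -> Decision:
    """Auto-apply only low-risk, high-confidence recommendations;
    everything else must be confirmed by a human."""
    if high_risk or rec.confidence < auto_threshold:
        approved = ask_human(reviewer, rec)
        return Decision(rec.action if approved else "rejected", reviewer)
    return Decision(rec.action, decided_by="model")

# Usage: a high-risk recommendation is never applied without a human.
rec = Recommendation(action="approve_loan", confidence=0.91)
print(decide(rec, reviewer="analyst_17", high_risk=True))
```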


     

  2. Transparency: 



    Transparency is the most important principle for creating ethical AI systems. Users must know when they are dealing with an AI system, and companies must ensure this in all instances. Cases such as Replika and Character.ai, where the system claims to be human even though a notice at the top of the page says otherwise, should be avoided at all costs. People can be confused and manipulated more easily than we think, and we must therefore make sure that systems always inform us correctly about their outputs.



    For future-proofing the technology, it is important to know from the beginning which content is AI-generated and which is not. Otherwise, substantial amounts of future training data will themselves be AI-generated, and may contain biases or inaccuracies, thereby making future systems more difficult to fine-tune. Additionally, we must find a way to tackle deepfakes. This is and will remain one of the biggest problems we have with AI, as deepfake techniques will become more sophisticated, to the point where we won't be able to distinguish a real image or video from a fake one. That is a massive problem, as it will erode trust in AI services and products and allow malicious actors to undermine whatever they target. Companies and institutions will need to spend significant time and resources dealing with this issue in the future.
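
    One possible disclosure mechanism, sketched below in Python under my own assumptions, is to attach a machine-readable provenance record to every generated artifact so that both end users and future training pipelines can tell AI-generated content apart. Real deployments would build on an established standard such as C2PA and sign the record cryptographically; the field names here are purely illustrative.

```python
import hashlib
import json
from datetime import datetime, timezone

def label_ai_content(content: bytes, model_name: str) -> dict:
    """Build a machine-readable provenance record for a generated artifact.
    (Illustrative only; a real deployment would follow a standard such as
    C2PA and cryptographically sign the record.)"""
    return {
        "ai_generated": True,
        "model": model_name,
        "created_at": datetime.now(timezone.utc).isoformat(),
        "sha256": hashlib.sha256(content).hexdigest(),
    }

# Example: tag a generated text snippet and serialize the label alongside it.
text = b"This paragraph was produced by a generative model."
record = label_ai_content(text, model_name="example-model-v1")
print(json.dumps(record, indent=2))

# A training-data pipeline could then skip anything carrying this flag:
def keep_for_training(record: dict) -> bool:
    return not record.get("ai_generated", False)
```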


     

  3. Explainability:



    Explainability is paramount for the future of AI systems. The infamous "black box" issue is troublesome from many perspectives, but one of the most crucial is that it "hides" from people how an AI system makes its decisions, thereby creating mistrust and ample opportunity for error. Just as we know how a smartphone operates, we should be able to understand how an AI system functions, even if it is complicated.



    While most smartphone users cannot provide technical details of their device's operation, they can easily access comprehensive resources explaining its functionality. However, currently even AI scientists struggle to understand why some generative AI systems produce specific outputs or reach particular conclusions, and the systems themselves cannot provide explanations. While such limitations may be acceptable for tasks like text summarization or creating artistic images, they become problematic when AI systems are deployed in judicial, medical, or economic domains. In these critical areas, understanding the reasoning behind AI decisions becomes essential to maintain accountability and predictability, which are fundamental to democratic processes.
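
    To give a flavor of what post-hoc explanation can look like, the sketch below implements permutation importance, one simple technique among many (the example is mine, not drawn from any system discussed here): treat the model as an opaque predict() function and measure how much its error grows when each input feature is shuffled. The features whose shuffling hurts the most are the ones the model actually relies on.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 3 features; the true outcome depends mostly on feature 0.
X = rng.normal(size=(500, 3))
y = 3.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=500)

# Stand-in "black box": a fixed linear model here, but the method below
# treats it as opaque and only ever calls predict().
weights = np.array([3.0, 0.5, 0.0])
def predict(X):
    return X @ weights

def permutation_importance(predict, X, y, n_repeats=10):
    """Average error increase when each feature is shuffled."""
    base_error = np.mean((predict(X) - y) ** 2)
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        for _ in range(n_repeats):
            X_perm = X.copy()
            rng.shuffle(X_perm[:, j])  # destroy feature j's information
            error = np.mean((predict(X_perm) - y) ** 2)
            importances[j] += (error - base_error) / n_repeats
    return importances

print(permutation_importance(predict, X, y))
# Feature 0 dominates, matching how the data was generated.
```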


     

  4. Safety, Security, and Accountability:



    AI systems must adhere to rigorous safety, security, and accountability requirements, given their significant potential impact and associated risks. This principle aligns with the European Union's approach to updating its Product Liability framework to accommodate AI technologies, with strict liability expected for AI-incorporated products. The ongoing development of mixed liability approaches, including presumptions to protect users under the AI Liability Directive, demonstrates the evolving nature of these requirements.



    Regarding safety and security, existing cybersecurity and data security standards must be applied to AI systems, similar to other products and services. The frameworks provided by ISO and NIST for AI systems management, combined with forthcoming harmonized standards related to the EU AI Act, will offer additional tools for institutional oversight and verification of safety measures throughout an AI system's lifecycle.
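
    As a small, hypothetical illustration of what accountability can mean at the engineering level (nothing below comes from the ISO, NIST, or EU texts themselves): an append-only audit log that records every model decision with enough context for an auditor to reconstruct it later.

```python
import json
import time
from typing import Optional

class AuditLog:
    """Append-only record of model decisions, written as JSON lines so
    an auditor can replay who decided what, when, and on which inputs."""

    def __init__(self, path: str):
        self.path = path

    def record(self, model_version: str, inputs: dict,
               output: str, reviewer: Optional[str] = None) -> None:
        entry = {
            "ts": time.time(),
            "model_version": model_version,
            "inputs": inputs,
            "output": output,
            "human_reviewer": reviewer,  # None when fully automated
        }
        with open(self.path, "a") as f:
            f.write(json.dumps(entry) + "\n")

# Usage: every decision leaves a durable, inspectable trace.
log = AuditLog("decisions.jsonl")
log.record("credit-model-2.1", {"income": 52000, "score": 710},
           output="approve", reviewer="analyst_17")
```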


     

  5. Privacy:



    While privacy regulations in theory apply to AI systems just as they do to other technologies, practical implementation faces unique challenges. How GDPR principles apply to AI training methods, particularly the data scraping practiced by major AI providers, remains unclear. The tension between the data minimization principle and AI systems' inherent need for extensive data presents a significant challenge.



    The philosophical and legal foundations of data privacy, rooted in human dignity according to many scholars, clash with current AI training practices that involve widespread scraping of personal information from the internet. This fundamental conflict between privacy rights and AI development methods requires careful consideration and resolution.
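
    To illustrate what the data minimization principle could look like in practice, here is a rough sketch under my own assumptions, not a compliance recipe: before a record enters a training corpus, drop every field the documented purpose does not need and pseudonymize the identifier that remains.

```python
import hashlib

# Fields the stated training purpose actually needs (an illustrative
# assumption; in practice this list follows from a documented purpose).
ALLOWED_FIELDS = {"age_bracket", "region", "interaction_text"}

def minimize(record: dict, salt: str) -> dict:
    """Keep only purpose-necessary fields; replace the direct identifier
    with a salted hash so records can be linked without exposing identity."""
    minimized = {k: v for k, v in record.items() if k in ALLOWED_FIELDS}
    if "user_id" in record:
        minimized["pseudo_id"] = hashlib.sha256(
            (salt + str(record["user_id"])).encode()
        ).hexdigest()[:16]
    return minimized

raw = {
    "user_id": 4711,
    "full_name": "Jane Doe",      # dropped: not needed for the purpose
    "email": "jane@example.com",  # dropped
    "age_bracket": "30-39",
    "region": "EU",
    "interaction_text": "How do I reset my password?",
}
print(minimize(raw, salt="rotate-me-regularly"))
```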


     

  6. Intellectual Property:



    Most companies do not disclose their training datasets, and part of the reason is that they fear intellectual property lawsuits. And they should. The current wave of generative AI development seems primarily driven by profit motives and the race to market. Companies like OpenAI have prioritized rapid deployment over careful consideration of legal and ethical implications. Depending on how the many pending court cases go, we might be witnessing what could amount to the largest-scale copyright infringement in history - all in the name of innovation.

 

This raises a crucial question: do we want technology built on legally questionable foundations, or should we demand more ethical approaches to AI development? While innovation is important, it shouldn't come at the cost of fundamental legal and ethical principles such as intellectual property rights. Artists, writers, and many other professionals have sometimes invested their whole lives in their creations, and it is at least debatable whether we should build AI systems that do not respect the rights of all these people. IP laws were created for exactly this purpose - to protect creative people from exploitation. AI companies should be held accountable if they try to circumvent these protections.

Implementation and Future Considerations

 

The six principles described above should form the foundation for policy development, legislation, and AI system creation. Without these cornerstones, we risk not only losing control over AI development but also creating systems that harm rather than benefit society. Given that AI represents humanity's most powerful technological achievement, associated risks demand serious consideration and extensive mitigation efforts. Ethical and responsible AI development requires robust governance practices, institutional oversight, and regulatory frameworks incorporating these principles.

 

A crucial overarching consideration must be the purpose of each AI system. From both ethical and business perspectives, AI systems with unclear goals or vague purposes will fail to deliver either monetary or social value. The current trend of implementing generative AI solely for its novelty has proved detrimental to all stakeholders, from investors to end users. While acknowledging the significant current excitement surrounding AI, many experts argue this represents another technology bubble, likely to burst due to technological limitations and limited practical applications outside specific industries. Both individuals and organizations must recognize that generative AI merely mimics human intelligence without true understanding or reasoning capabilities. Its utility remains confined to specific use cases, and achieving broader applicability will require substantial technological advancement, alignment with human values, and careful ethical consideration. Some experts contend that fundamental technical limitations may prevent such broad applicability entirely.

 

AI has the potential to become a transformative technology, but only if implemented with appropriate care and consideration, and with recognition that technological advancement must align with core human values. Neglecting ethics and human-centric principles in AI development risks undermining the potential benefits of this powerful technology for society as a whole.

 
 
 
