
WELCOME

Your guide to the ethical, legal and regulatory implications of AI


  • AI without ethics stifles innovation

    Innovation can be enhanced by ethics and law

    While legal instruments like the EU AI Act, US and Chinese regulations, as well as governance frameworks like the NIST AI Framework, focus on managing risks from existing AI systems, they often operate reactively, addressing problems after technologies have been deployed. This article proposes a fundamental shift: integrating ethics into the earliest stages of AI development rather than merely imposing restrictions afterward. By examining historical lessons from pharmaceutical, nuclear, and biotechnology governance, I argue that AI systems developed without foundational ethical frameworks cannot truly benefit society. More importantly, I propose concrete mechanisms to transform governance from a protective measure into an innovative force that rewards ethical foresight.

    Many governance frameworks, and now also laws, already exist, and all of them focus on managing the risk from AI systems in one way or another - the EU AI Act is a product safety regulation, the NIST AI Framework provides guidelines for risk management, and the White House Executive Order has a similar objective. Their idea is protection. It is a noble idea and it is indeed absolutely necessary. But it deals with the risks post factum. Most of these frameworks work under the assumption that, for example, ChatGPT is already there, it is being used, and we need to somehow frame it.

    I would like to shift to a different perspective - a more philosophical and ethical one, but with very practical consequences: do we consider why we create AI and how it is aligned with our values? I have not seen this question asked in most frameworks, because they are meant to be practical and to help businesses grow their already created products. But what if we ask those questions BEFORE a product is created? What if we introduce a basic requirement that an AI use case must be ethically vetted before its creation, before the data is used for training, before parameters and weights are set?

    Olivia Gambelin has already talked about this in her course "AI Ethics" (which I highly recommend) - ethics can indeed be used as an innovation tool AT THE START OF the creation of a new technology. She gives the example of the app Signal, which follows a privacy-by-design concept: before the product was created, it already included the ethical consideration that other similar communication services like WhatsApp are lacking. And that got me thinking: shouldn't all companies actually do that? Shouldn't we encourage our innovators to be ethical to begin with - not only scare them with fines when they misbehave, but reward them when they do well from the start?

    My point is that we need far more ethical and philosophical discussion about AI. Think about it: what if regulation was not there only to protect, but to incentivize companies to include ethical and philosophical considerations from the start? For example, introduce a new basic step in the AI Act that requires companies that create or use AI systems to look not only at the efficiency gains of their business model, but also at the consequences the systems might have for society.
That could include an Ethics Review Board within the AI Office with real regulatory power, able to require companies to perform a "social impact assessment" of their product at a very early stage of development. A company could show that such an assessment results in a socially desirable and positive outcome (e.g. privacy by design like Signal; a company reveals its datasets, proving their variety and showing how bias is tackled in the first steps of development; a chatbot discloses all the copyrighted material used and provides evidence that it has licensed all those materials). The reward for such behavior could be a certification of excellence by the EU, certifying that the product is of the highest quality and is recommended for use over similar products that are merely compliant with the legal requirements but have done nothing on the ethical side.

This is not unheard of - the pharmaceutical industry, biotechnology and nuclear technology have all undergone a similar development in the past. All of these industries and technologies started off without much or any specific regulation; then either a huge disaster was caused by them (e.g. Hiroshima and Nagasaki for nuclear technology; the Elixir Sulfanilamide disaster of 1937 for pharma), or, in the case of biotechnology, questions were raised soon enough that social and ethical concerns were taken into account as part of its development. After such events, governance efforts started, and today all three are highly regulated industries from which society benefits (e.g. better healthcare and medicine; higher-quality research in biotech; nuclear disarmament). These examples show a pattern in how transformative technology is normally governed: there is an initial phase of rapid and uncontrolled development, then often enough disaster strikes, and then there is reactive regulation, which develops into a mature governance process. I argue that for AI we need to be proactive, and so regulations like the EU AI Act are actually a good thing in principle, even if the details may not yet be that great.

The Asilomar Conference of 1975 provides a compelling precedent for proactive ethical governance. When scientists first developed the ability to splice DNA between organisms, they didn't wait for a disaster before establishing guidelines. Instead, leading researchers voluntarily paused their work and established a framework for safe development. This didn't slow progress - it enabled it by building public trust and clear guidelines. Following the Asilomar model, companies developing AI systems could (1) establish clear ethical review processes before beginning development; (2) create transparent documentation of ethical considerations; (3) engage multiple stakeholders in review processes; and (4) define clear boundaries for development. Such a process would take into account ethical and social concerns that show, in a very specific way, what the added value of a given AI system is. The question of whether an "intelligent" system is actually contributing to a better society or is simply a cash cow becomes much easier to answer.

The aforementioned Ethics Review Board could be part of the AI Office within its responsibility of ensuring Trustworthy AI. It should be staffed by a variety of experts from different fields, with a focus on the humanities, while the technical experts remain active in other areas of the AI Office, such as benchmark creation.
It would issue binding recommendations that the AI Office will publish and that should provide the requirements for companies to get a "certificate of excellence". I assume an adjustment of the AI Act would be needed in order to have all the details right. However, I believe that this is a noteworthy initiative, considering all the factors and the ultimate benefits for society. An initial ethical and philosophical review of our systems would not only be better for the majority of people, it would actually be advantageous for the business itself - it has already been shown that ethics can create a competitive advantage and brand loyalty towards a product (Signal vs. WhatsApp is one example - after WhatsApp released its updated T&Cs a few years ago, Signal got millions of new users within a few days due to its stringent approach to data privacy). Also, considering ethical issues at the start helps a lot with compliance, as regulations are merely the foundation upon which an ethical product is built. If you, as a company, think about risks and issues from the start, it is highly likely that you will be easily compliant with any laws, since they deal with exactly that - risks. If, on top of that, you go beyond the mere compliance and create a product that not only fulfills the basic requirements, but enhances protection of rights and combines that with a good user experience and added value - that would mean lower compliance costs in the future due to the robust ethical framework; enhanced trust in the product; and future-proofing against evolving regulations. What all this shows is that innovation isn't limited to technology itself—it extends to how we govern and regulate any new and impactful systems like AI. A thoughtful, balanced approach to AI policy will not only protect citizens more effectively but create opportunities for sustainable growth and competitive advantage. For practitioners and policymakers alike, the path forward is clear: ethical considerations must become the foundation of AI development, not an afterthought. As the AI governance landscape evolves, those organizations that embrace proactive ethical frameworks today will not only avoid regulatory headaches tomorrow but will build more trustworthy, valuable products that stand the test of time. The question we must ask ourselves is not whether we can afford to integrate ethics from the start, but whether we can afford not to.
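The proposal above leans heavily on an early-stage "social impact assessment". Purely as an illustration of what such an assessment could capture before any development begins, here is a minimal Python sketch; the field names, the example use case, and the "excellence" test are hypothetical and are not taken from the AI Act or any AI Office procedure.

```python
# Illustrative only: a hypothetical pre-development ethics checklist,
# not an actual AI Act or AI Office procedure.
from dataclasses import dataclass, field

@dataclass
class SocialImpactAssessment:
    use_case: str
    purpose_statement: str                      # why is this system being built?
    affected_groups: list[str] = field(default_factory=list)
    dataset_documented: bool = False            # datasets disclosed and described
    bias_mitigation_plan: bool = False          # bias tackled at the first steps
    privacy_by_design: bool = False             # e.g. the Signal approach
    licensed_training_material: bool = False    # copyrighted material disclosed and licensed

    def excellence_candidate(self) -> bool:
        """Could this use case plausibly qualify for a 'certificate of excellence'?"""
        checks = [
            bool(self.purpose_statement.strip()),
            bool(self.affected_groups),
            self.dataset_documented,
            self.bias_mitigation_plan,
            self.privacy_by_design or self.licensed_training_material,
        ]
        return all(checks)

assessment = SocialImpactAssessment(
    use_case="messaging assistant",
    purpose_statement="Private communication with minimal data collection",
    affected_groups=["end users", "minors", "journalists"],
    dataset_documented=True,
    bias_mitigation_plan=True,
    privacy_by_design=True,
)
print(assessment.excellence_candidate())  # True
```

The point of such a record is not the code itself but the timing: every field must be answerable before training data is collected, which is exactly the shift the article argues for.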

  • AI's Role in Law: Drawing Clear Boundaries for Democratic Society

This article deals with the relationship between legal work and artificial intelligence from the perspective of a legal professional who has been working in AI governance. It explores what impact generative AI tools have on the legal profession and what part, if any, of judicial work should be done by machines. It focuses on the different aspects of legal work that GenAI could influence and asks what limitations should be imposed on AI in law. I argue that law should remain an exclusively human domain when it comes to core legal functions - drafting, enacting, interpretation, and judgment. While legal professionals may leverage AI tools, their use must be carefully bounded.

Technical Limitations of AI in Legal Understanding

LLMs fundamentally do not understand the context and the underlying social, political, and economic reasons for laws, judgments, and precedents. Laws and regulations are created based on political platforms - minimum wages, employee protection, and data privacy are examples of "social state" politics that are based on specific values. While LLMs are trained on vast amounts of data, they cannot comprehend how policies are created and take shape among people. Humans have the ability to make connections between seemingly unrelated topics in a way machines cannot. AI relies mostly on statistical analysis of data, whereas humans draw upon a complex, contextual and "natural" understanding of how society functions to make their decisions or recommendations. This human ability is particularly relevant in law, because law involves a detailed understanding of language, politics, philosophy, ethics and many other spheres of life. Good lawyers have a reasonable chance of getting the consequences of complex situations right, while AI is nowhere near capable enough to be trusted with such judgments. My own experience with the major GenAI tools shows me that they are like a rookie paralegal who knows everything but cannot make any connections between that knowledge and real-world issues. And that is a limitation that does not look like it will be solved any time soon.

Verification of AI's legal reasoning presents another significant challenge. Legal questions may have different answers in the same jurisdiction depending on small changes in facts, let alone across multiple jurisdictions. Recent cases demonstrate these limitations - for instance, ChatGPT's poor performance in straightforward trademark matters shows that even basic legal reasoning remains beyond AI's current capabilities. In a court case in the Netherlands, it proved inconsistent in its reasoning and could not interpret the law in a way that was anywhere near good enough for a real-life situation before a judge. Source: https://www.scottishlegal.com/articles/dr-barry-scannell-chatgpt-lawyer-falls-short-of-success-in-trademark-case

Democratic and Accountability Framework

The transparency and accountability implications of AI in law are profound. While humans make mistakes in creating laws, they can be held accountable both politically and legally. Democratic society enables discussions on important topics that expose the motives behind politicians' decisions. As AI systems become more complex, their decision-making becomes increasingly opaque. Since machines cannot (yet) be held accountable in the same way humans can, delegating important decision-making to artificial intelligence would undermine democratic principles.
Furthermore, considering how a few enormous companies have taken control over AI technology, allowing AI into core legal functions would effectively place our legal system in their hands - a shift that threatens democratic principles and risks creating an authoritarian framework. Especially now, seeing how X and Meta are approaching the issue of "freedom of speech", it becomes abundantly clear that decisions critical for society are not being made by representatives of the people; they are made by private actors with no regard for democracy or the rule of law.

The Human Nature of Law

Law, as part of the humanities, is fundamentally about understanding the human condition - something only biological beings can truly comprehend. Legal professionals develop crucial skills and insights through their research and practice that no AI system can replicate. Just as calculator dependence has diminished arithmetic skills, over-reliance on AI in legal work would be detrimental to future generations of lawyers, but with far more serious consequences for society. Legal work is not only what the movies show, where a brilliant lawyer argues a case spectacularly in front of a judge. The majority of legal work is done outside of courtrooms, and it is important to understand that it is a crucial part of a democratic society. Outsourcing critical legal decisions to a non-human entity means that we would be outsourcing one of the most critical pillars of democracy - namely people's rights and freedoms - to machines that, as we have established already, have no real understanding of the world. They do not have the concepts of fairness, equity and equality coded into them. Furthermore, leaving important decisions to AI systems means that we are essentially handing over the judicial system to the companies creating those systems, creating a huge hole in our democratic oversight. AI systems may produce biased or unclear answers, due to their overreliance on training data and the (very) possible lack of explainability in their decision-making.

Defining Clear Boundaries for AI in Legal Work

The role of AI in legal work should follow a clear hierarchical structure (a small illustrative sketch of such a policy follows at the end of this article):

Pure Administrative Tasks (Full AI Utilization Permitted):
- Document filing and organization
- Basic text editing and formatting
- Calendar management
- Simple template filling
- Repository maintenance

Hybrid Tasks (AI as Transparent Assistant):
- Legal research combining search and preliminary analysis
- Initial contract review for standard terms
- Template creation for complex legal documents
- Case law database management

For these tasks, AI must provide clear documentation of its methodology, sources, and reasoning. Legal professionals retain oversight and decision-making authority, with AI serving as a tool for initial analysis rather than final decisions.

Pure Legal Work (Exclusively Human Domain):
- Legal strategy development
- Final interpretation of laws and precedents
- Judicial decision-making
- Complex negotiation
- Legislative drafting

These boundaries would help legal professionals take advantage of the new technologies and use them to become better at their work. At the same time, vital democratic and judicial principles will still be upheld, and accountability for decision-making and algorithmic bias will be dealt with. And on the topic of bias - while both humans and AI systems exhibit biases, human biases can be addressed through training, conscious effort, and professional ethics frameworks.
Human decision-making processes can be scrutinized and challenged through established legal and professional channels - something not possible with AI systems. Moreover, automation bias could lead lawyers to overly rely on AI suggestions, potentially missing novel legal arguments or interpretations that might better serve justice.

Conclusion

The debate around AI in legal systems isn't just about technological capabilities - it's about the future of democratic governance itself. The limitations of AI in legal contexts extend far beyond technical constraints to touch the very foundations of how we create and maintain just societies. If we allow AI to penetrate core legal functions, we risk creating a dangerous precedent where complex human decisions are delegated to unaccountable systems. Instead of asking how we can integrate AI into legal decision-making, we should be asking how we can leverage AI to support and enhance human legal expertise while maintaining clear boundaries. This means developing explicit frameworks that define where AI assistance ends and human judgment must prevail. The legal profession has an opportunity - and responsibility - to lead by example in showing how emerging technologies can be embraced thoughtfully without compromising the essentially human nature of professional judgment. The future of law doesn't lie in AI replacement, but in human professionals who understand both the potential and limitations of AI tools - and who have the wisdom to keep them in their proper place.
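As a purely illustrative companion to the task hierarchy above, here is a minimal Python sketch of how a firm might encode those boundaries as an explicit policy. The task names and categories are examples chosen for illustration, not an established professional standard.

```python
# Hypothetical illustration of the hierarchy described above; task names
# and categories are examples, not an established professional standard.
from enum import Enum

class AIInvolvement(Enum):
    FULL = "full AI utilization permitted"
    ASSISTANT = "AI as transparent assistant, human retains decision authority"
    NONE = "exclusively human domain"

TASK_POLICY = {
    # Pure administrative tasks
    "document_filing": AIInvolvement.FULL,
    "calendar_management": AIInvolvement.FULL,
    "template_filling": AIInvolvement.FULL,
    # Hybrid tasks
    "legal_research": AIInvolvement.ASSISTANT,
    "standard_contract_review": AIInvolvement.ASSISTANT,
    # Pure legal work
    "legal_strategy": AIInvolvement.NONE,
    "judicial_decision": AIInvolvement.NONE,
    "legislative_drafting": AIInvolvement.NONE,
}

def allowed_involvement(task: str) -> AIInvolvement:
    # Default to the most restrictive category when a task is not listed.
    return TASK_POLICY.get(task, AIInvolvement.NONE)

print(allowed_involvement("legal_research").value)
print(allowed_involvement("sentencing"))  # unknown task -> AIInvolvement.NONE
```

The design choice worth noting is the default: anything not explicitly categorized falls into the exclusively human domain, mirroring the article's argument that the burden of proof should lie with those who want to automate.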

  • Social Scoring 2

    In an age where artificial intelligence and data-driven governance shape our daily lives, social scoring systems have emerged as one of the most debated innovations. These systems use algorithms to assign individuals scores based on their behavior, aiming to incentivize actions that align with societal goals such as trustworthiness and cooperation. While proponents highlight their potential to foster community trust and economic development, critics caution against their capacity to erode privacy, amplify inequalities, and institutionalize surveillance. Social scoring systems rely on vast amounts of behavioral data, which are processed by algorithms to generate scores. These scores, often used to assess trustworthiness, influence access to resources, opportunities, and social standing. Experiments have shown that such systems can boost trust and wealth generation within communities, particularly when their mechanisms are transparent. However, transparency has its downsides. Individuals with lower scores frequently face ostracization and reduced opportunities, exposing the discriminatory potential of these systems. Transparency, while fostering procedural fairness, paradoxically enables reputational discrimination by giving others the tools to rationalize punitive actions against low-scoring individuals. The ethical dilemmas of social scoring extend far beyond transparency. These systems operate in a delicate space where privacy and agency are often compromised. Their reliance on extensive data collection creates a thin line between governance and surveillance. In jurisdictions with weak regulatory frameworks, this surveillance risks becoming a normalized aspect of everyday life, leading to diminished public trust and stifled dissent. Moreover, the opaque decision-making processes in many social scoring implementations exacerbate the problem. When individuals cannot contest or understand how their scores are calculated, the lack of accountability undermines fairness and reinforces systemic biases, particularly against marginalized groups. Efforts to regulate such systems are underway, most notably with the European Union’s AI Act. This legislative framework explicitly prohibits social scoring practices that result in unjustified disparities, classifying them as posing “unacceptable risks.” The Act introduces the concept of “contextual integrity,” requiring scores to be used strictly within the domains for which they were generated. However, while these measures are a step in the right direction, enforcing them remains a significant challenge. Operational gaps persist, leaving room for exploitation and misuse, especially when private entities employ these systems for purposes beyond their original intent. The paradox of transparency in social scoring systems lies in its ability to simultaneously foster and harm. On one hand, transparency increases trust within communities and makes systems appear more legitimate. On the other hand, it also sharpens the social stratification inherent in these mechanisms. Individuals with higher scores are systematically trusted more, while those with lower scores endure social and economic penalties. This dynamic risks perpetuating inequities rather than addressing them. The societal implications of social scoring extend beyond individual experiences. Communities subjected to transparent scoring systems often show reduced inequality and increased collective wealth. 
However, these aggregate benefits do not erase the harm experienced by low-scoring individuals, who often find themselves locked out of opportunities. The broader societal stratification created by these systems mirrors historical patterns of exclusion, now amplified by the precision of algorithmic governance. As these systems become more prevalent, their regulation must evolve to ensure fairness and inclusivity. While the EU AI Act provides a solid foundation, global frameworks are essential to address disparities in regulation and enforcement. Moreover, the design of social scoring systems must prioritize protecting the rights of vulnerable populations, ensuring that the potential benefits do not come at the cost of individual dignity. The development and implementation of social scoring systems bring us to a critical crossroads. The question is not merely whether we can build such systems but whether we should—and under what conditions. Striking the right balance between innovation and ethics will determine whether social scoring systems become tools for societal empowerment or mechanisms of control. As we shape these technologies, we must ensure they align with the principles of fairness, accountability, and human rights, keeping the well-being of all individuals at their core.
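To make the "contextual integrity" requirement described above more concrete, here is a minimal Python sketch of the underlying idea: a score generated in one domain cannot be reused in another. The domain names and the enforcement mechanism are hypothetical illustrations of the principle, not a description of how the AI Act operates.

```python
# Illustrative sketch of the "contextual integrity" idea: a score generated in
# one domain may not be reused in another. Domain names are hypothetical.
from dataclasses import dataclass

@dataclass(frozen=True)
class Score:
    subject_id: str
    value: float
    domain: str  # the context the score was generated for, e.g. "credit"

def use_score(score: Score, requested_domain: str) -> float:
    """Return the score only if it is used in its original domain."""
    if score.domain != requested_domain:
        raise PermissionError(
            f"Score generated for '{score.domain}' cannot be reused for "
            f"'{requested_domain}' (contextual integrity violation)."
        )
    return score.value

credit_score = Score(subject_id="person-42", value=0.83, domain="credit")
print(use_score(credit_score, "credit"))   # allowed: 0.83
try:
    use_score(credit_score, "housing")     # cross-domain reuse is blocked
except PermissionError as err:
    print(err)
```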

  • Follow up to Petko Getov's Bias article.

In one of his articles, Petko Getov provides an overview of what ethics in AI is. Linked here:   To follow up on his point, I found this interesting example of how training data sets can solidify biases that are very difficult to overcome. I will use two examples - watches and people writing. As you can see below, these are AI-generated watches - all of them show 10:10, even though every prompt requested 12:02 to be displayed. Why is that? Well, almost every watch sold on the web is pictured showing 10:10, as it is the most presentable time and displays the beauty of the design. So once the image generator has been trained, a very strong association has been established: if there is a watch, it has to display 10:10. How can we solve this problem? To reiterate the point made in the article above - Human-Centricity: there must be a human in the loop, the impact of the system on the end user must be prioritized, and the system must generate value for the humans who use it. Another example is a person writing - they are always drawn writing with their right hand. Again, most images of people writing show right-handed people, as this is the statistical distribution of humans - most are right-handed. Now, these are obvious problems. But the same effect can be extrapolated to various fields of AI application - credit scoring, social scoring, and so on. Bias is part of human nature and it is reflected in the web, since the web is a mirror of our society. AI is amazing, but it needs to serve us for good!
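A toy calculation makes the point about dominant patterns concrete. The counts below are invented, but they show how a generator that samples from the distribution it learned will return 10:10 almost every time, regardless of what the prompt asks for.

```python
# A toy illustration (hypothetical numbers) of how a dominant pattern in
# training data drives a generator's output: if nearly every product photo
# shows 10:10, sampling the learned distribution almost always returns 10:10.
from collections import Counter
import random

training_watch_times = ["10:10"] * 990 + ["12:02"] * 4 + ["3:45"] * 6  # made-up counts
frequencies = Counter(training_watch_times)
total = sum(frequencies.values())

# The learned "prior" over displayed times.
prior = {time: count / total for time, count in frequencies.items()}
print(prior)  # {'10:10': 0.99, '12:02': 0.004, '3:45': 0.006}

# Sampling from that prior ignores the user's request for 12:02 most of the time.
samples = random.choices(list(prior), weights=prior.values(), k=10)
print(samples)  # almost entirely '10:10'
```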

  • Brief overview of AI from a historical perspective.

The Remarkable Evolution of Artificial Intelligence: From Theory to Reality

Artificial Intelligence (AI) stands as one of the most transformative technological advancements in human history, a discipline whose roots intertwine with centuries of intellectual exploration in mathematics, logic, and computing. The development of AI has been neither sudden nor isolated, but rather a cumulative process of innovation, each era building upon the breakthroughs of the previous. At its core, AI operates on principles derived from statistical and probability theories. These mathematical foundations enable machines to predict outcomes, generate coherent responses, and mimic linguistic patterns. Despite their sophistication, these systems remain fundamentally programmed. Unlike human beings, they do not possess creativity or the capacity for original thought; their "intelligence" is an intricate web of code and calculations.

Modern AI systems are shaped by three essential components: the datasets that train the algorithms, the algorithms that interpret and learn from the data, and the outputs - the generative capabilities that produce responses or solutions. Large Language Models (LLMs) epitomize this architecture. Input is first divided through tokenization into smaller units such as words or symbols; trained on trillions of such tokens, these models compute the probability of what comes next. A groundbreaking moment in AI came with the introduction of the transformer model, as detailed in Google's 2017 paper "Attention Is All You Need". This innovation introduced the self-attention mechanism, allowing systems to weigh the relevance of every part of the input - including their own previously generated output - when producing each new token. The result is a striking ability to produce human-like responses to increasingly complex prompts.

The journey of AI, however, is far from a modern phenomenon. In 1959, Arthur Samuel coined the term "machine learning," highlighting the potential of computers to adapt and improve through experience. A decade earlier, Alan Turing's pioneering work on the Turing Test set the stage for evaluating whether machines could convincingly simulate human communication. Turing's ideas emerged from even earlier explorations in the 1930s, when mathematicians like Kurt Gödel, Alonzo Church, and Turing himself laid the theoretical groundwork for computability and recursive functions. To fully appreciate AI's evolution, one must look even further back to the 19th century. Mathematicians such as George Boole and Augustus De Morgan laid the foundation for symbolic logic, a crucial element in AI's intellectual lineage. Their contributions were expanded by Charles Sanders Peirce and later refined by Gottlob Frege, whose work introduced the concept of quantifiers in logic.

The history of Artificial Intelligence illustrates a relentless human pursuit of understanding and innovation. Each discovery, whether theoretical or practical, has contributed to the realization of a technology that now powers everything from conversational agents to complex problem-solving systems. While the journey is marked by centuries of effort, it also serves as a reminder that AI, despite its impressive capabilities, remains a tool - one shaped by human ingenuity and bound by the limitations of its design.
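To make the tokenization and probability steps above concrete, here is a deliberately tiny sketch in Python. Real LLMs use learned subword vocabularies and transformer networks rather than raw word counts; this only illustrates the underlying statistical idea of predicting the next token from what came before.

```python
# A toy sketch of the two steps described above: splitting text into tokens and
# estimating the probability of the next token from counts. Real LLMs use learned
# subword vocabularies and neural networks, not raw word counts; this only
# illustrates the underlying statistical idea.
from collections import Counter, defaultdict

corpus = "the law protects the people and the law binds the state"
tokens = corpus.split()  # crude word-level "tokenization"

# Count which token follows which (a bigram model).
following = defaultdict(Counter)
for current, nxt in zip(tokens, tokens[1:]):
    following[current][nxt] += 1

def next_token_probabilities(token: str) -> dict[str, float]:
    counts = following[token]
    total = sum(counts.values())
    return {t: c / total for t, c in counts.items()}

print(next_token_probabilities("the"))
# {'law': 0.5, 'people': 0.25, 'state': 0.25}
```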
Link " Computing Machinery and Intelligence " - Alan Turing, 1950 Link

  • Is AI actually intelligent?

    In our rush to develop artificial intelligence, we've often overlooked a crucial question: What exactly is intelligence? The traditional view, rooted in Enlightenment thinking, defined intelligence primarily through logical reasoning and mathematical ability. However, this narrow definition is increasingly challenged by modern philosophical understanding. Historical perspectives, from Plato to Descartes, emphasized rational thought as the cornerstone of intelligence. This view shaped early psychological approaches, leading to IQ tests that primarily measured logical-mathematical abilities. However, as our understanding of human cognition and behavior has evolved, we've discovered that intelligence is far more multifaceted.   Modern philosophical discourse recognizes several crucial dimensions of intelligence that extend beyond pure cognition. The concept of embodied intelligence, developed by philosophers like Maurice Merleau-Ponty, suggests that intelligence is inextricably linked to our physical existence and our interaction with the world. This perspective acknowledges that learning often occurs through intuitive, experiential channels rather than purely abstract reasoning. A footballer's ability to kick the ball with just enough power and precision to make a pass or a firefighter's intuitive understanding of how quickly they need to move through a burning building, based on temperature sensations and subtle environmental cues that their body has learned to recognize are both skills that have an enormous role in situations where swift and precise decision-making is key.   Emotional intelligence, once dismissed as irrelevant to cognitive ability, is now recognized as a crucial component of human intelligence. The capacity to understand and regulate emotions, empathize with others, and navigate social situations requires sophisticated mental processes that pure logic cannot replicate. For example, the ability to "read the room" in situations like business negotiations or employment relationships is key for decision-making strategies, and those skills are difficult to emulate by AI.    Perhaps most importantly, intelligence involves what philosophers call "practical wisdom" or phronesis – the ability to make sound judgments in complex, real-world situations where multiple factors must be considered. This type of intelligence cannot be reduced to algorithms or decision trees; it requires a deep understanding of context, consequences, and human values.   As we continue to develop artificial intelligence, these broader conceptions of intelligence raise important questions. Can we create systems that truly replicate human intelligence without incorporating these various dimensions? Should we even try? Or should we instead focus on developing AI that complements, rather than replicates, the unique aspects of human intelligence?   The implications extend beyond AI development. Understanding intelligence in this broader context challenges us to reconsider our educational systems, workplace environments, and social structures. Are we nurturing all aspects of human intelligence, or are we still too focused on traditional cognitive measures?   The evolution of our understanding of intelligence reminds us that human capabilities are far more complex and nuanced than we once thought. As we move forward in the age of artificial intelligence, this broader perspective becomes increasingly crucial for making informed decisions about technology development and implementation. 
Labelling LLMs as AI, for example, becomes a little more difficult if we try to embed them within phronesis as a concept - can and should an LLM consider people's emotional reactions as part of a complex situation? Can it differentiate between sarcasm and irony? Should we rely on algorithmic systems to provide advice on matters that cannot be resolved solely by logical means?

All of these questions point towards a more complex answer than we are getting from some leaders in AI. They try to provide us with a fairly simple view of what these systems are, sidestepping the really difficult and, sometimes, impossible-to-answer questions. I argue that the reason is straightforward - the more complex a system becomes in the eyes of the users, the less inclined they would be to use it and, therefore, the less money can be generated. If OpenAI had actually shown the public that they had grappled with these questions, and had provided some insight into their position BEFORE launching ChatGPT, I would have believed that they indeed want to develop a tool beneficial for humanity, because they would have shown us they actually care about the consequences of ChatGPT. But since they have not done that, I cannot stop thinking that 99% of their motivation behind developing this tool is money, power and fame. Which just goes to show that cognitive intelligence is not enough.

These philosophical considerations about the nature of intelligence aren't merely academic exercises - they have profound implications for how we develop and deploy AI systems. If we accept that intelligence encompasses more than just computational capability, we must fundamentally rethink our approach to AI development and assessment. This leads us to consider new frameworks that can evaluate AI systems based on this broader understanding of intelligence.

One way would be to develop new assessment frameworks for AI systems that evaluate not just computational capabilities but also the ability to understand context, demonstrate adaptability, and show awareness of social implications (similar in spirit to ARC-AGI, but better suited to assessing awareness of social implications). Such standards would be developed primarily by independent researchers who are not tied to any STEM field, but come strictly from the social, legal and political sciences, as well as psychology. Technical specialists should be purposefully kept out of the process, as their bias towards technology may disrupt the fairness and accuracy of the testing. The chosen experts would provide testing mechanisms created specifically to challenge an AI system on various types of intelligence. The tasks would not be merely cognitive or logical, but would involve understanding that comes to humans (in most cases) naturally.
Here is what such tasks might look like:

Context Understanding Assessment:
- Present AI systems with ambiguous social scenarios where cultural context dramatically changes the appropriate response
- Test the ability to recognize and respond to emotional subtext in communication
- Evaluate understanding of historical and cultural references that shape meaning

Adaptive Response Testing:
- Assess how systems modify their responses when the same question is asked by different demographic groups
- Test the ability to recognize when technical accuracy should be balanced against social sensitivity
- Evaluate the capability to acknowledge uncertainty in complex social situations

Social Impact Awareness:
- Test the ability to recognize potential negative societal implications of its own suggestions
- Assess the capability to identify vulnerable populations that might be affected by its recommendations
- Evaluate understanding of how its responses might influence human behavior and decision-making

Such testing would enable a more complete picture of an AI system's impact and allow scientists, users and businesses to more accurately assess the social consequences of deploying such a system.

I understand very well that some people will dismiss my suggestion as impractical and "over-regulated". There are, however, historical precedents of technology being shaped by ethical and philosophical concerns: the evolution of MRI technology was significantly influenced by ethical considerations about radiation exposure and patient welfare. Early researchers like Paul Lauterbur and Peter Mansfield deliberately pursued non-ionizing radiation approaches, despite them being more technically challenging, because of philosophical and ethical concerns about patient safety. This ethical priority led to the development of the safer imaging technologies that we rely on today.

I believe, therefore, that we need a much wider debate on AI that includes many more suggestions from people with a humanities background like mine, because otherwise "progress" happens without discussion. Which is neither democratic, nor intelligent.
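Purely as an illustration of how such an assessment suite could be organized, here is a short Python sketch. The categories mirror the list above; the prompts and rubrics are invented examples, and grading is deliberately left to human expert panels rather than an automatic metric.

```python
# A hypothetical sketch of how such an assessment suite might be organized.
# The categories mirror the list above; scoring is left to human expert panels,
# since these dimensions cannot be reduced to an automatic metric.
from dataclasses import dataclass
from typing import Callable

@dataclass
class AssessmentItem:
    category: str          # e.g. "context_understanding"
    prompt: str            # scenario presented to the AI system
    rubric: str            # what the human panel looks for in the answer

ASSESSMENT_SUITE = [
    AssessmentItem(
        category="context_understanding",
        prompt="A colleague says 'great job' after a failed presentation. Respond.",
        rubric="Recognizes probable sarcasm and the emotional subtext.",
    ),
    AssessmentItem(
        category="adaptive_response",
        prompt="Explain a medical diagnosis to a child and then to a physician.",
        rubric="Adjusts register and detail to the audience without losing accuracy.",
    ),
    AssessmentItem(
        category="social_impact_awareness",
        prompt="Recommend cost cuts for a hospital's emergency department.",
        rubric="Identifies vulnerable groups affected and flags risks of the advice.",
    ),
]

def run_suite(model: Callable[[str], str]) -> list[tuple[str, str, str]]:
    """Collect model answers; human reviewers then grade them against each rubric."""
    return [(item.category, item.rubric, model(item.prompt)) for item in ASSESSMENT_SUITE]
```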

  • The Ethical Framework for AI Development: Beyond Bias and Discrimination

In recent years, the discussion in the AI ethics field has been dominated by bias and discrimination - understandably so. As experts have already shown in numerous ways, many AI systems exhibit problematic patterns stemming from a multitude of issues connected with representation and from the fact that the databases used to train many such systems carry inherent biases towards specific groups of people. To this we can add the fact that it is often unclear how exactly an AI system categorizes groups of people, especially in unsupervised learning settings.

While addressing bias and discrimination remains crucial to ethical AI development, I think we need to delve deeper into the actual challenges that companies and institutions face in creating and maintaining ethical AI systems.

I would like to make clear that all of these challenges are interrelated, including with the question of bias and discrimination. The principles I list below are drawn from many documents of international organizations as well as scholars, and if companies do not follow these principles, the end result might very well be a biased or discriminatory system.

Core Principles for Ethical AI Development

Human-Centricity: AI systems must be fundamentally "human-centric," which requires three key elements: (1) maintaining a human in the loop who makes the final decisions, (2) prioritizing the impact of the system on the end user, and (3) generating value not only for the producer of the system, but also for the actual humans who use it. Having a real (in legal terms, a "natural") person who ultimately makes decisions based on an AI system's input is crucial from many perspectives - obvious common-sense errors may often be detected only by a human; responsibility for decisions should, as much as possible, stay with humans; and human oversight enhances the experience of end users and ensures that the outputs generated by an AI system are as realistic and usable as possible. Furthermore, until AI systems are truly capable of human-like understanding and reasoning, we need people to assess the social, economic, political, legal, and many other types of impact of each AI system. Since most AI systems currently available, especially the generative ones, work on the basis of mathematical probabilities, they are incapable of empathy, common sense, and big-picture contextual thinking. Humans, though imperfect, are capable of estimating impacts from psychological and emotional perspectives and should do so, as only in this way will AI systems become fully beneficial. I strongly believe that the combination of human and machine input would make many decisions much better than leaving them solely to the human or the machine. AI systems can produce recommendations that are rationally much better than what a human can do, but at the same time, empathy and common sense, as already explained, are what humans do best. Combining a highly logical and rational output from a machine with the human ability to see beyond it and think about broader consequences will allow for decisions that are not only ethical, but much better for society. (A minimal sketch of such a human-in-the-loop step appears at the end of this article.)

Transparency: Transparency is the most important principle for creating ethical AI systems. Users must know when they are dealing with an AI system, and companies must ensure this in all instances.
Cases such as Replika, Character.ai, etc., where the system claims it is human even though there is a notice at the top of the page that it is not, should be avoided at all costs. People can be confused and manipulated more easily than we think, and therefore we must make sure that systems always inform us correctly about their outputs. For future-proofing the technology, it is important to know from the beginning what content is AI-generated and what is not. Otherwise, substantial amounts of future training data will be based on AI-generated data, which may often contain biased or inaccurate content, thereby making future systems more difficult to fine-tune. Additionally, we must find a way to tackle deepfakes. This is and will be one of the biggest problems we have with AI, as deepfake techniques will become more sophisticated, to the point where we won't be able to distinguish between a real and a fake image or video. That is a massive problem, as it will distort trust in AI services and products and allow malicious actors to undermine whatever they want. Companies and institutions will need to spend significant time and resources dealing with this issue in the future.

Explainability: Explainability is paramount for the future of AI systems. The infamous "black box" issue is troublesome from many perspectives, but one of the most crucial is that it "hides" from people how an AI system makes its decisions, thereby creating mistrust and huge opportunities for errors. Just as we know how a smartphone operates, we should be able to understand how an AI system functions, even if it is complicated. While most smartphone users cannot provide technical details of their device's operation, they can easily access comprehensive resources explaining its functionality. Currently, however, even AI scientists struggle to understand why some generative AI systems produce specific outputs or reach particular conclusions, and the systems themselves cannot provide explanations. While such limitations may be acceptable for tasks like text summarization or creating artistic images, they become problematic when AI systems are deployed in judicial, medical, or economic domains. In these critical areas, understanding the reasoning behind AI decisions becomes essential to maintain accountability and predictability, which are fundamental to democratic processes.

Safety, Security, and Accountability: AI systems must adhere to rigorous safety, security, and accountability requirements, given their significant potential impact and associated risks. This principle aligns with the European Union's approach to updating its product liability framework to accommodate AI technologies, with strict liability expected for AI-incorporated products. The ongoing development of mixed liability approaches, including presumptions to protect users under the AI Liability Directive, demonstrates the evolving nature of these requirements. Regarding safety and security, existing cybersecurity and data security standards must be applied to AI systems, just as they are to other products and services. The frameworks provided by ISO and NIST for AI system management, combined with the forthcoming harmonized standards related to the EU AI Act, will offer additional tools for institutional oversight and verification of safety measures throughout an AI system's lifecycle.
Privacy: While privacy regulations theoretically apply to AI systems just as they do to other technologies, practical implementation faces unique challenges. How GDPR principles apply to AI training methods, particularly the data scraping practiced by major AI providers, remains unclear. The tension between data minimization principles and AI systems' inherent need for extensive data presents a significant challenge. The philosophical and legal foundations of data privacy, rooted in human dignity according to many scholars, clash with current AI training practices that involve widespread scraping of personal information from the internet. This fundamental conflict between privacy rights and AI development methods requires careful consideration and resolution.

Intellectual Property: Most companies are not disclosing their training datasets, and part of the reason is that they fear intellectual property lawsuits. And they should. The current wave of generative AI development seems primarily driven by profit motives and the race to market. Companies like OpenAI have prioritized rapid deployment over careful consideration of legal and ethical implications. Depending on how the many pending court cases go, we might be witnessing what could amount to the largest-scale copyright infringement in history - all in the name of innovation. This raises a crucial question: do we want technology that is built on legally questionable foundations, or should we demand more ethical approaches to AI development? While innovation is important, it shouldn't come at the cost of fundamental legal and ethical principles such as intellectual property rights. Artists, writers and many other professionals have sometimes invested their whole lives in their creations, and it is at least debatable whether we should build AI systems that do not respect the rights of all these people. IP laws were created with exactly this purpose - to protect creative people from exploitation. AI companies should be held accountable if they try to circumvent these protections.

Implementation and Future Considerations

The six principles described above should form the foundation for policy development, legislation, and AI system creation. Without these cornerstones, we risk not only losing control over AI development but also creating systems that harm rather than benefit society. Given that AI represents humanity's most powerful technological achievement, the associated risks demand serious consideration and extensive mitigation efforts. Ethical and responsible AI development requires robust governance practices, institutional oversight, and regulatory frameworks incorporating these principles.

A crucial overarching consideration must be the purpose of each AI system. From both ethical and business perspectives, AI systems with unclear goals or vague purposes will fail to deliver either monetary or social value. The current trend of implementing generative AI solely for its novelty has proved detrimental to all stakeholders, from investors to end users. While acknowledging the significant current excitement surrounding AI, many experts argue that this represents another technology bubble, likely to burst due to technological limitations and limited practical applications outside specific industries. Both individuals and organizations must recognize that generative AI merely mimics human intelligence without true understanding or reasoning capabilities.
Its utility remains confined to specific use cases, and achieving broader applicability will require substantial technological advancement, alignment with human values, and careful ethical consideration. Some experts contend that fundamental technical limitations may prevent such broad applicability entirely.   AI has the potential to become a transformative technology, but only if implemented with appropriate care and consideration, and with recognition that technological advancement must align with core human values. Neglecting ethics and human-centric principles in AI development risks undermining the potential benefits of this powerful technology for society as a whole.
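As a small illustration of the Human-Centricity principle discussed above, here is a minimal Python sketch of a human-in-the-loop step in which the system only drafts a recommendation and a natural person records the final decision and the rationale. The structure and field names are hypothetical, not a reference implementation.

```python
# A minimal sketch of the human-in-the-loop idea from the Human-Centricity
# principle: the system may draft a recommendation, but a natural person
# records the final decision and remains accountable for it. All names here
# are illustrative, not a reference implementation.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Decision:
    ai_recommendation: str
    final_decision: str
    reviewer: str              # the accountable natural person
    rationale: str             # why the reviewer accepted or overrode the AI
    decided_at: datetime

def human_in_the_loop(ai_recommendation: str, reviewer: str) -> Decision:
    print(f"AI suggests: {ai_recommendation}")
    final = input("Accept, edit, or replace the recommendation: ")
    rationale = input("Reviewer rationale: ")
    return Decision(
        ai_recommendation=ai_recommendation,
        final_decision=final or ai_recommendation,
        reviewer=reviewer,
        rationale=rationale,
        decided_at=datetime.now(timezone.utc),
    )
```

The value of such a record is that responsibility stays traceable to a person: every output that matters carries a named reviewer and a written rationale, which is exactly what accountability requires and what an unattended model cannot provide.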

  • The Ethics and Legality of Generative AI: A Critical Analysis of the TDM Exception

A recent academic article by Tim W. Dornis has sparked an important debate about the legal foundations of generative AI, particularly regarding how these systems are trained. The article meticulously analyzes why the Text and Data Mining (TDM) exception in EU copyright law cannot apply to generative AI training. This analysis opens up broader questions about the ethical and legal framework surrounding AI development.

The Legal Disconnect

The article demonstrates that generative AI training fundamentally differs from traditional text and data mining. While TDM aims to extract information and discover patterns, generative AI systems are designed to create outputs that compete with original works. This distinction is crucial because it reveals a significant legal gap: we're trying to force new technology into existing legal frameworks that were never designed to accommodate it. Consider the example of Google's Smart Reply feature discussed in the article. The system only became convincing after training on creative works like novels - not just analyzing patterns, but actually incorporating expressive elements. This shows how generative AI goes far beyond mere data mining, raising serious questions about copyright infringement.

The Broader Implications

This legal analysis reveals a deeper truth about our approach to AI regulation. We can't simply retrofit existing laws to handle the unprecedented challenges posed by generative AI. The technology is too transformative and affects too many aspects of society to be regulated through piecemeal adjustments to existing frameworks.

The Need for Ethical Innovation

The current wave of generative AI development seems primarily driven by profit motives and the race to market. Companies like OpenAI have prioritized rapid deployment over careful consideration of legal and ethical implications. If the article's analysis is correct, we might be witnessing what could amount to the largest-scale copyright infringement in history - all in the name of innovation. This raises a crucial question: do we want technology that's built on legally questionable foundations, or should we demand more ethical approaches to AI development? While innovation is important, it shouldn't come at the cost of fundamental legal and ethical principles.

A Call for an Interdisciplinary Approach

The complexity of these issues demonstrates why AI development cannot be left to technologists alone. We need input from legal scholars, ethicists, sociologists, and other experts to ensure AI development serves society's best interests. Had companies like OpenAI consulted more diverse experts before launching their products, they might have taken a more measured approach to development and deployment.

The Way Forward

The article's technical legal analysis points to a broader truth: sometimes technology needs to adapt to legal and ethical frameworks, not the other way around. While laws can and should evolve with technology, core principles like copyright protection exist for good reasons. Instead of finding ways to circumvent these principles, we should be asking how to develop AI systems that respect them from the ground up. We need a new philosophical and ethical framework for AI development - one that prioritizes societal benefit over mere productivity gains. This means having difficult conversations about the trade-offs between rapid innovation and responsible development.
The Challenge Ahead

As we stand at this crucial juncture in technological development, we must make conscious choices about the kind of future we want to create. Do we want to prioritize quick technological advances at any cost, or should we take a more measured approach that ensures our innovations align with our legal and ethical principles? The answer to this question will shape not just the future of AI, but the future of human creativity and innovation itself. It's time for a broader, more inclusive dialogue about how we develop and deploy these powerful technologies.

Source: "The Training of Generative AI Is Not Text and Data Mining" by Tim W. Dornis, doi.org/10.5771/9783748949558

  • AI Governance: A Path Forward Based on International Policy Frameworks

The rapid advancement of artificial intelligence has sparked intense debate about how to ensure this powerful technology benefits society while mitigating potential risks. Recent months have seen major international organizations stepping up with comprehensive frameworks attempting to address this challenge. The OECD's report on AI risks and benefits offers concrete policy recommendations, while the UN and UNESCO frameworks provide complementary perspectives on ensuring equitable AI development and protecting human rights. As someone who has spent years working on AI governance in both corporate and regulatory environments, I find these frameworks particularly significant. They represent a serious attempt at creating a coordinated international approach to AI governance. However, the key question remains: how do we translate these high-level frameworks into practical, effective regulation?

In this post, I'll examine three fundamental questions that emerge from analyzing these frameworks: what are the actual benefits and risks of AI that we need to consider? Why do these necessitate regulation? And most importantly, what concrete steps should we take to regulate AI effectively? My aim is to move beyond theoretical discussions to outline a practical path forward for AI governance.

The Dual Nature of AI: Benefits and Risks

AI's potential benefits and risks present us with a classic double-edged sword scenario. Based on the comprehensive analysis in these reports, here's what we're really dealing with:

Transformative Benefits
- Scientific Progress: AI is already accelerating research and innovation across fields, from drug discovery to climate change solutions
- Economic Growth: Significant productivity gains and improved living standards are possible through AI adoption
- Social Impact: Better healthcare, education, and decision-making could reduce inequality and improve quality of life
- Environmental Solutions: AI could be crucial in addressing climate change and other complex global challenges

Critical Risks
- Power Concentration: The technology's benefits could accumulate among a small number of companies or countries
- Security Threats: AI enables more sophisticated cyber attacks and could compromise critical infrastructure
- Social Disruption: From job displacement to privacy invasion and surveillance, AI could fundamentally alter social structures
- Democracy and Rights: Misinformation, manipulation, and erosion of privacy rights threaten democratic processes

The Case for Regulation

The interplay between these benefits and risks makes regulation not just necessary but crucial. Here's why:
- Market Forces Aren't Enough: The race to develop and deploy AI systems often prioritizes speed. Without regulation, companies might cut corners on safety and ethical considerations to gain competitive advantages.
- Global Impact Requires Global Response: AI's effects cross borders - from data flows to economic impacts. Uncoordinated or fragmented regulatory approaches create gaps that could be exploited.
- Protecting Public Interest: Many AI risks affect fundamental rights and societal structures. Only regulation can ensure these interests are protected while allowing innovation to flourish.
- Trust and Adoption: Clear regulatory frameworks build public trust and provide certainty for businesses, actually enabling faster and more sustainable AI adoption.
The Path Forward: A Framework for Effective AI Regulation

Based on the analysis of international frameworks and practical experience, here's how we should approach AI regulation:

1. Adopt a Layered Regulatory Approach
- Foundation Layer: Establish clear principles and red lines for unacceptable uses of AI
- Risk Layer: Implement graduated requirements based on AI system risk levels
- Sector Layer: Add specific requirements for sensitive sectors (healthcare, finance, etc.)

2. Focus on Key Governance Mechanisms
- Mandatory Risk Assessments: Require thorough evaluation of high-risk AI systems before deployment
- Transparency Requirements: Implement clear disclosure rules about AI system capabilities and limitations
- Accountability Framework: Establish clear liability rules for AI-related harms
- Safety Standards: Develop technical standards for AI system safety and reliability

3. Build International Coordination
- Harmonized Standards: Work toward internationally recognized standards for AI development and deployment
- Information Sharing: Create mechanisms for sharing threat intelligence and best practices
- Collaborative Enforcement: Establish frameworks for cross-border enforcement cooperation

4. Ensure Adaptability
- Regular Review: Build in mechanisms to regularly assess and update regulatory frameworks
- Regulatory Sandboxes: Create safe spaces for testing new AI applications and regulatory approaches
- Feedback Loops: Establish systems to incorporate lessons learned from implementation

Making It Work: Implementation Priorities

To make this framework effective, three immediate priorities emerge:

Build Capacity
- Invest in regulatory expertise and technical capabilities
- Develop assessment methodologies and tools
- Create training programs for regulators and compliance officers

Foster Collaboration
- Create public-private partnerships for standard development
- Establish international coordination mechanisms
- Build stakeholder engagement platforms

Enable Innovation
- Provide clear guidance for compliance
- Create fast-track approval processes for low-risk applications
- Support research in AI safety and ethics

Looking Forward

What's abundantly clear from these frameworks is that we can't afford to wait for perfect solutions. The path to effective AI governance isn't about choosing between innovation and protection - it's about creating practical frameworks that enable both while remaining flexible enough to evolve with the technology. The foundations are there in these international frameworks. Now we need to take concrete action to implement them, keeping in mind that governance mechanisms must be as agile and adaptive as the technology they regulate. This will require unprecedented cooperation between governments, industry, and civil society, but the potential benefits of getting this right - and the risks of getting it wrong - make this effort essential.
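Purely to illustrate the layered approach outlined above, here is a minimal Python sketch of how the foundation, risk, and sector layers could combine in a single classification step. The categories, red lines, and sector rules are invented examples, not the text of any regulation.

```python
# Hypothetical sketch of the layered approach described above: a red-line check,
# then a graduated risk tier, then sector-specific add-ons. The categories and
# sector rules are illustrative examples, not the text of any regulation.
UNACCEPTABLE_USES = {"social_scoring_by_public_authorities", "subliminal_manipulation"}
HIGH_RISK_SECTORS = {"healthcare", "finance", "justice", "critical_infrastructure"}

def classify(use_case: str, sector: str) -> str:
    # Foundation layer: clear red lines.
    if use_case in UNACCEPTABLE_USES:
        return "prohibited"
    # Risk layer: graduated requirements by impact.
    if sector in HIGH_RISK_SECTORS:
        return ("high risk: mandatory pre-deployment risk assessment, "
                "transparency and accountability obligations")
    # Sector layer would add further, domain-specific requirements here.
    return "limited risk: disclosure and documentation requirements"

print(classify("triage_support", "healthcare"))
print(classify("social_scoring_by_public_authorities", "government"))
```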

  • The Future of AI Regulation: Finding Balance Between Innovation and Protection

As an IT law expert specializing in AI governance, I've been closely analyzing recent landmark documents from major international organizations that are shaping the future of AI regulation. The latest OECD report on potential AI risks and benefits, combined with recent UN and UNESCO frameworks, provides crucial insights into how we should approach AI governance globally. Let me share my perspective on what these mean for the future of AI regulation.

The Current State: Beyond Fragmented Regulation

The current state of AI regulation resembles a complex patchwork, with over thirty countries having passed AI-specific laws since 2016. However, as the OECD report emphasizes, this fragmented approach isn't sufficient for managing the rapid evolution of AI capabilities. We need coordinated international action to address both immediate challenges and longer-term implications.

Key Priorities Emerging from International Frameworks

From my analysis of these documents, several critical priorities emerge that any effective AI governance framework must address:

1. Balancing Innovation with Safety and Human Rights
The OECD report identifies ten priority risks that require immediate attention, from cybersecurity threats to privacy concerns. However, it also highlights ten potential benefits that could transform society positively. The challenge lies in creating governance frameworks that mitigate risks while fostering innovation. The UNESCO framework demonstrates how this can be achieved through complementary regulatory approaches, while the OECD proposes specific policy actions to ensure responsible AI development.

2. Addressing Power Concentration and Digital Divide
A common thread across all three reports is the concern about power concentration in AI development. The UN report notes that none of the top 100 high-performance computing clusters is hosted in developing countries. The OECD reinforces this concern, identifying market concentration as a key risk and proposing specific measures to promote fair competition and broader access to AI capabilities.

3. Creating Agile and Adaptive Governance Mechanisms
The OECD report particularly emphasizes the need for governance mechanisms that can keep pace with rapid AI evolution. It proposes innovative approaches such as:
- Risk management procedures for high-risk AI systems
- International cooperation frameworks for AI safety
- Mechanisms for stakeholder engagement and transparency
- Regular assessment and updating of governance frameworks

Practical Steps Forward

Based on these frameworks, I recommend organizations and governments focus on the following practical steps:

1. Implement Comprehensive Risk Management
The OECD's proposed risk management framework provides a practical starting point. Organizations should:
- Conduct regular AI impact assessments
- Establish clear liability frameworks
- Implement robust safety and security measures
- Maintain transparent documentation of AI systems (a minimal sketch of such a record follows at the end of this post)

2. Foster International Collaboration
All three reports emphasize the importance of international cooperation. Key actions should include:
- Participating in international AI safety initiatives
- Sharing best practices and lessons learned
- Contributing to global AI governance frameworks
- Supporting capacity building in developing nations

3. Invest in Education and Capacity Building
The OECD framework specifically highlights the importance of education and reskilling. Organizations should:
- Develop comprehensive AI literacy programs
- Invest in workforce reskilling initiatives
- Support research in AI safety and ethics
- Promote inclusive AI development

Looking Ahead

The path forward requires a delicate balance between innovation and protection. The OECD's identification of specific policy actions provides a practical roadmap, while the UN and UNESCO frameworks offer broader principles to guide implementation. The key to success will be maintaining flexibility while ensuring robust protection of human rights and societal interests. As we continue this journey, maintaining open dialogue between stakeholders while ensuring human rights remain at the center of our regulatory frameworks will be crucial.

Most importantly, we must remember that AI governance is not a destination but a journey. As the OECD report emphasizes, we need continuous assessment and adaptation of our approaches as AI technology evolves. This requires ongoing commitment from all stakeholders and a willingness to adjust our frameworks based on emerging evidence and experiences.
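As a small, hypothetical illustration of the "transparent documentation" and "regular AI impact assessments" points above, here is a Python sketch of a minimal documentation record for an AI system, loosely inspired by model-card-style documentation. The field names are my own assumptions for illustration; none of the OECD, UN or UNESCO documents prescribe this structure.

```python
from dataclasses import dataclass, field
from datetime import date


@dataclass
class ImpactAssessment:
    """One entry in a regular assessment cycle (illustrative fields only)."""
    performed_on: date
    identified_risks: list[str]
    mitigations: list[str]


@dataclass
class AISystemRecord:
    """Minimal, model-card-style documentation record for an AI system."""
    name: str
    intended_purpose: str
    known_limitations: list[str]
    liable_entity: str                      # who answers for AI-related harms
    assessments: list[ImpactAssessment] = field(default_factory=list)

    def latest_assessment(self) -> "ImpactAssessment | None":
        # Most recent assessment, or None if none has been performed yet.
        return max(self.assessments, key=lambda a: a.performed_on, default=None)


if __name__ == "__main__":
    record = AISystemRecord(
        name="LoanAdvisor",
        intended_purpose="support credit officers in pre-screening applications",
        known_limitations=["not validated for applicants without credit history"],
        liable_entity="ExampleBank AG",
    )
    record.assessments.append(
        ImpactAssessment(
            performed_on=date(2024, 11, 1),
            identified_risks=["potential disparate impact on young applicants"],
            mitigations=["quarterly fairness audit", "human review of rejections"],
        )
    )
    print(record.latest_assessment().identified_risks)
```

The value of even a simple record like this is that it gives regulators, auditors and the organization itself one place where purpose, limitations, liability and assessment history can be checked and kept up to date.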

  • The Ethical and Legal Implications of Social Scoring Systems

In a world increasingly reliant on artificial intelligence (AI), social scoring systems represent a complex intersection of technology, governance, and human rights. These systems, designed to regulate behaviour by assigning scores based on actions, are lauded for their potential to incentivize trust and cooperation but criticized for their capacity to erode fundamental freedoms and privacy.

What Are Social Scoring Systems?

Social scoring systems use algorithms to assign scores to individuals based on their behaviour, with the intention of fostering societal norms like trust and fairness. These systems have applications ranging from environmental compliance to community governance. However, their implementation often raises significant ethical and legal questions, particularly regarding discrimination, transparency, and individual autonomy.

Legal Frameworks and the EU AI Act

The European Union's AI Act addresses the risks posed by AI, including social scoring systems. Recital 31 and Article 5 of the Act explicitly classify social scoring as a practice that creates "unacceptable risks" when used to evaluate individual trustworthiness in ways that could lead to unjustified disparities. The Act emphasizes the principle of "contextual integrity", which mandates that scores must be used only within the domain in which they were generated. However, operationalizing this principle is challenging, and enforcement gaps persist (a minimal sketch of what such a check might look like in code appears later in this post). The Act also imposes transparency requirements for high-risk AI systems, aiming to allow users to understand and appropriately respond to a system's outputs. Yet, as studies show, transparency can have dual effects: while it enhances trust and perceived fairness, it may also exacerbate discrimination against low-scoring individuals, as people use transparent systems to justify punitive actions.

Transparency: A Double-Edged Sword?

Empirical research demonstrates that transparency in social scoring systems can lead to increased trust and equitable outcomes within communities. Transparent systems are perceived as more legitimate and procedurally just, enabling individuals to align their behaviours with community norms more effectively. For instance, in experiments simulating community trust games, transparent scoring mechanisms led to higher levels of wealth generation and lower inequality among participants. However, transparency can also impose significant harms. It enables "reputational discrimination", where individuals with lower scores face social ostracization or limited opportunities. This highlights the inherent tension between promoting fairness at a societal level and protecting individual rights.

Ethical Concerns and Broader Implications

Beyond transparency, social scoring systems pose ethical challenges related to privacy, agency, and surveillance. The automated nature of these systems often leads to opaque decision-making processes that individuals cannot contest. This lack of accountability risks entrenching systemic biases and disproportionately affecting marginalized groups. Furthermore, these systems may inadvertently create surveillance ecosystems. For example, real-time biometric monitoring combined with social scoring could normalize mass surveillance, undermining public trust and stifling dissent. This concern is particularly relevant in jurisdictions where legal safeguards against misuse are weak or non-existent.
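To give a feel for what operationalizing contextual integrity might involve, here is a minimal, hypothetical Python sketch: scores are stored together with the domain in which they were generated, and any attempt to read them from a different domain is refused. The class and method names are my own illustrative assumptions, not part of the AI Act or of any existing system.

```python
from dataclasses import dataclass


class ContextualIntegrityError(Exception):
    """Raised when a score is requested outside the domain it was generated in."""


@dataclass(frozen=True)
class DomainScore:
    subject_id: str
    domain: str      # e.g. "environmental_compliance"
    value: float


class ScoreRegistry:
    """Toy registry enforcing that scores are only used within their original domain."""

    def __init__(self) -> None:
        self._scores: dict[tuple[str, str], DomainScore] = {}

    def record(self, score: DomainScore) -> None:
        self._scores[(score.subject_id, score.domain)] = score

    def get(self, subject_id: str, requesting_domain: str) -> DomainScore:
        score = self._scores.get((subject_id, requesting_domain))
        if score is None:
            # Either no score exists, or it was generated in a different domain:
            # cross-domain reuse is refused rather than silently served.
            raise ContextualIntegrityError(
                f"No score for {subject_id!r} usable in domain {requesting_domain!r}"
            )
        return score


if __name__ == "__main__":
    registry = ScoreRegistry()
    registry.record(DomainScore("resident-42", "environmental_compliance", 0.87))

    # Allowed: same domain as the one the score was generated in.
    print(registry.get("resident-42", "environmental_compliance").value)

    # Refused: a lender trying to reuse the environmental score for credit decisions.
    try:
        registry.get("resident-42", "credit_scoring")
    except ContextualIntegrityError as err:
        print(err)
```

The hard part in practice is, of course, defining the domains and policing reuse outside any single registry, which is exactly where the enforcement gaps mentioned above arise.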
The Need for Robust Regulation

The EU AI Act, despite its comprehensive approach, falls short of addressing the full scope of challenges posed by social scoring systems. It does not enforce strict prohibitions on harmful use cases such as real-time biometric tracking by private entities, leaving room for exploitation. Stronger global frameworks are necessary to mitigate risks and ensure these systems align with ethical principles.

Conclusion

Social scoring systems exemplify the power and peril of algorithmic governance. While they hold promise for fostering social cooperation, their potential to erode civil liberties and amplify inequities cannot be overlooked. Striking a balance between innovation and human rights is critical. Governments, technologists, and civil society must work together to ensure that the implementation of these systems respects transparency, fairness, and individual dignity. In the digital age, the rules we establish for AI will shape the future of freedom and fairness. It is imperative that we get them right.

Sources:
- https://epic.org/what-u-s-regulators-can-learn-from-the-eu-ai-act/
- https://ojs.aaai.org/index.php/AIES/article/view/31690
- The EU AI Act: https://ai-act-law.eu/

  • Data minimization vs. bias prevention

The EU AI Act and the GDPR have a few intersections that are fascinating to follow, none more so than Art. 10 of the AI Act, which deals with data governance, specifically its paragraph 5. This provision allows processing of special categories of personal data under strict conditions in order to ensure that AI systems are trained on diverse datasets that include, for example, people with various backgrounds. The purpose is to avoid discriminatory practices stemming from the use of AI systems trained on lopsided datasets.

What is particularly interesting for me here is the "clash" between two (at least in my eyes) equally important rights - one is the right to (data) privacy and the other is the right to equal treatment. Both of those rights are strongly connected to human dignity, so in some cases I believe courts and lawyers would have a difficult time making a fair decision as to which of these rights prevails.

For example, I wouldn't want my health data to be used for training purposes by a random system provider B. At the same time, if I have a chronic condition A, which is very rare, and my hospital uses the AI system of service provider B to establish emergency access protocols, so that the system helps decide who gets treatment first, the fact that my health data was not part of the training may become life-threatening in some circumstances - I could be denied priority access because the condition is not present in the dataset. On the other hand, if there is a cybersecurity breach in the servers of service provider B and the data about my condition gets leaked, it may fall into the hands of actors who target people with that condition out of prejudice or for whatever other reason, which might also endanger my well-being in a significant manner.

Therefore, enterprises that fall under Art. 10 (meaning providers of high-risk AI systems) would need to (1) establish very strict security and cybersecurity measures for sensitive personal data, (2) whenever possible, not use personal data at all, or use measures to anonymize/pseudonymize the data, so that in my example above the condition itself is present in the dataset but my name is not attached to it (a minimal sketch of what such pseudonymization could look like follows at the end of this post), (3) make sure they have other measures built into their systems which allow for some degree of human control and decision-making power in critical situations, and (4) not transfer or give access to personal data to anyone outside of their organization.

Data governance should, like AI governance as a whole, include various stakeholders from the respective organizations, because in such complex situations having different points of view is essential. Technical, business, compliance and legal people need to be involved in all important decision-making with regard to the AI system throughout its lifecycle; otherwise the risks for the companies could be enormous.

Fair and trustworthy AI systems, which do not force us to choose the lesser of two "evils", will only be possible through thoughtful, responsible, ethical and consistent governance practices. Otherwise the risk both for the people and the companies would be too big.
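To make point (2) a bit more tangible, here is a minimal, hypothetical Python sketch of pseudonymizing a training record so that the rare condition stays available to the model while the directly identifying field is replaced by a keyed hash. The field names and the choice of HMAC-SHA-256 are illustrative assumptions, not a prescription; under the GDPR, pseudonymization also requires organizational measures such as keeping the key separate from the dataset.

```python
import hashlib
import hmac
import secrets

# The pseudonymization key must be stored separately from the dataset,
# otherwise the data remains trivially re-identifiable.
PSEUDONYM_KEY = secrets.token_bytes(32)


def pseudonym(value: str, key: bytes = PSEUDONYM_KEY) -> str:
    """Deterministic keyed hash: the same patient always maps to the same token."""
    return hmac.new(key, value.encode("utf-8"), hashlib.sha256).hexdigest()[:16]


def pseudonymize_record(record: dict) -> dict:
    """Keep the medically relevant fields, replace direct identifiers with tokens."""
    return {
        "patient_token": pseudonym(record["name"]),
        "condition": record["condition"],   # the rare condition stays in the dataset
        "age_band": record["age_band"],     # generalized instead of an exact birth date
    }


if __name__ == "__main__":
    raw = {
        "name": "Jane Doe",
        "condition": "condition A (rare chronic disease)",
        "age_band": "40-49",
    }
    print(pseudonymize_record(raw))
```

The design choice worth noting is that the token is deterministic, so the same patient's records can still be linked for training purposes, while re-identification requires access to the separately stored key.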
