
AI governance grounded in research, shaped by legal expertise, guided by ethics.

A platform for evidence-based AI governance analysis, practical implementation frameworks, and ethical strategies that drive competitive advantage.

Why this site exists

Most AI governance discussion is driven by vendor marketing or disconnected from implementation reality. This platform bridges that gap.

Here you'll find:

Evidence-based analysis - Research on what actually works, not what's promised. OECD data, MIT studies, and real performance metrics.

Legal insight - EU AI Act compliance, US regulatory complexity, and litigation trends analyzed from active legal practice.

Ethical frameworks in action - Ethics isn't just compliance—it's a competitive advantage. Case studies showing how companies like Apple and Microsoft win with ethical AI, while others lose billions through ethical failures. Practical guidance on turning privacy, fairness, and transparency into market differentiation.

Implementation guidance - Practical tools for building governance that survives contact with reality, with ethics integrated from the start.

AI Governance Implementation Training

Learn to build governance frameworks that work in practice. This full-day course combines legal expertise, real case studies, and ethical implementation strategies for AI governance in organizations.

Sound interesting?

Learn more on The Course's page

What you'll get

Evidence-Based Analysis

Learn why implementations fail and how to avoid common pitfalls through real data and case studies.

Ethics as Competitive Advantage

Detailed case studies showing how ethical AI drives success (Apple, Microsoft) and how failures destroy value (Anthropic, Twitter/X's $60B loss).

NIST-Based Frameworks

Practical frameworks using NIST guidelines with ethics integration you can implement immediately.

46-Page Comprehensive Handbook

Ready-to-use templates, checklists, and assessment tools you can apply in your organization.

Seven Key Challenges

Solutions for the critical obstacles AI governance professionals face in real implementations.

Field-Tested Governance

From someone currently implementing these strategies in real organizations, with ethics at the core.

Articles


  • AI's Role in Law: Drawing Clear Boundaries for Democratic Society

    This article deals with the relationship between legal work and artificial intelligence from the perspective of a legal professional working in AI governance. It explores the impact generative AI tools have on the legal profession and asks which parts of judicial work, if any, should be done by machines. It focuses on the different aspects of legal work that GenAI could influence and asks what limitations should be imposed on AI in law. I argue that law should remain an exclusively human domain when it comes to core legal functions - drafting, enacting, interpretation, and judgment. While legal professionals may leverage AI tools, their use must be carefully bounded.

Technical Limitations of AI in Legal Understanding

LLMs fundamentally don't understand the context and the underlying social, political, and economic reasons for laws, judgments, and precedents. Laws and regulations are created based on political platforms - minimum wages, employee protection, and data privacy are examples of "social state" politics grounded in specific values. While LLMs are trained on vast amounts of data, they cannot comprehend how policies take shape among people. Humans can make connections between seemingly unrelated topics in a way machines cannot. AI relies mostly on statistical analysis of data, whereas humans draw upon a complex contextual and "natural" understanding of how society functions to make their decisions or recommendations. This human ability is particularly relevant in law, because law involves a detailed understanding of language, politics, philosophy, ethics, and many other spheres of life. Good lawyers have a reasonable chance of getting the consequences of complex situations right, while AI is nowhere near capable enough to be trusted with such judgments.
My own experience with the major GenAI tools shows me that they are like a rookie paralegal who knows everything but cannot make any connections between that knowledge and real-world issues. And that is a limitation that does not look like it will be solved any time soon.

Verification of AI's legal reasoning presents another significant challenge. Legal questions may have different answers in the same jurisdiction depending on small changes in facts, let alone across multiple jurisdictions. Recent cases demonstrate these limitations - for instance, ChatGPT's poor performance in a straightforward trademark matter before a Dutch court shows that even basic legal reasoning remains beyond AI's current capabilities. The tool was inconsistent in its reasoning and could not interpret the law in any way well enough for a real-life situation before a judge. Source: https://www.scottishlegal.com/articles/dr-barry-scannell-chatgpt-lawyer-falls-short-of-success-in-trademark-case

Democratic and Accountability Framework

The transparency and accountability implications of AI in law are profound. While humans make mistakes in creating laws, they can be held accountable both politically and legally. Democratic society enables discussions on important topics that expose the motives behind politicians' decisions. As AI systems become more complex, their decision-making becomes increasingly opaque. Since machines cannot (yet) be held accountable in the same way humans can, delegating important decision-making to artificial intelligence would undermine democratic principles. Furthermore, considering how a few enormous companies have taken control of AI technology, allowing AI into core legal functions would effectively place our legal system in their hands - a shift that threatens democratic principles and risks creating an authoritarian framework.
Especially now, seeing how X and Meta are approaching the issue of "freedom of speech", it becomes abundantly clear that decisions critical for society are not being made by representatives of the people; they are made by private actors with no regard for democracy or the rule of law.

The Human Nature of Law

Law, as part of the humanities, is fundamentally about understanding the human condition - something only biological beings can truly comprehend. Legal professionals develop crucial skills and insights through their research and practice that no AI system can replicate. Just as calculator dependence has diminished arithmetic skills, over-reliance on AI in legal work would be detrimental to future generations of lawyers, with far more serious consequences for society. Legal work is not like the movies, where a brilliant lawyer argues a case spectacularly in front of a judge. The majority of legal work is done outside of courtrooms, and it is important to understand its crucial role in a democratic society. Outsourcing critical legal decisions to a non-human entity means outsourcing one of the most critical pillars of democracy - namely people's rights and freedoms - to machines that, as we have established already, have no real understanding of the world. They have no concept of fairness, equity, or equality coded into them. Furthermore, leaving important decisions to AI systems means handing the judicial system over to the companies creating those systems, opening a huge hole in our democratic oversight. AI systems may also produce biased or unclear answers, due to their reliance on training data and the (very) possible lack of explainability in their decision-making.
Defining Clear Boundaries for AI in Legal Work

The role of AI in legal work should follow a clear hierarchical structure:

Pure Administrative Tasks (Full AI Utilization Permitted):

  • Document filing and organization
  • Basic text editing and formatting
  • Calendar management
  • Simple template filling
  • Repository maintenance

Hybrid Tasks (AI as Transparent Assistant):

  • Legal research combining search and preliminary analysis
  • Initial contract review for standard terms
  • Template creation for complex legal documents
  • Case law database management

For these tasks, AI must provide clear documentation of its methodology, sources, and reasoning. Legal professionals retain oversight and decision-making authority, with AI serving as a tool for initial analysis rather than final decisions.

Pure Legal Work (Exclusively Human Domain):

  • Legal strategy development
  • Final interpretation of laws and precedents
  • Judicial decision-making
  • Complex negotiation
  • Legislative drafting

These boundaries would help legal professionals take advantage of the new technologies and use them to become better at their work, while vital democratic and judicial principles are still upheld and the accountability of decision-making and algorithmic bias are addressed. And on the topic of bias - while both humans and AI systems exhibit biases, human biases can be addressed through training, conscious effort, and professional ethics frameworks. Human decision-making processes can be scrutinized and challenged through established legal and professional channels - something not possible with AI systems. Moreover, automation bias could lead lawyers to rely too heavily on AI suggestions, potentially missing novel legal arguments or interpretations that might better serve justice.

Conclusion

The debate around AI in legal systems isn't just about technological capabilities - it's about the future of democratic governance itself.
The limitations of AI in legal contexts extend far beyond technical constraints to touch the very foundations of how we create and maintain just societies. If we allow AI to penetrate core legal functions, we risk creating a dangerous precedent where complex human decisions are delegated to unaccountable systems. Instead of asking how we can integrate AI into legal decision-making, we should be asking how we can leverage AI to support and enhance human legal expertise while maintaining clear boundaries. This means developing explicit frameworks that define where AI assistance ends and human judgment must prevail. The legal profession has an opportunity - and responsibility - to lead by example in showing how emerging technologies can be embraced thoughtfully without compromising the essentially human nature of professional judgment. The future of law doesn't lie in AI replacement, but in human professionals who understand both the potential and limitations of AI tools - and who have the wisdom to keep them in their proper place.

  • AI without ethics stifles innovation

    Innovation can be enhanced by ethics and law

While legal instruments like the EU AI Act, US and Chinese regulations, as well as governance frameworks like the NIST AI Framework, focus on managing risks from existing AI systems, they often operate reactively - addressing problems after technologies have been deployed. This article proposes a fundamental shift: integrating ethics into the earliest stages of AI development rather than merely imposing restrictions afterward. By examining historical lessons from pharmaceutical, nuclear, and biotechnology governance, I argue that AI systems developed without foundational ethical frameworks cannot truly benefit society. More importantly, I propose concrete mechanisms to transform governance from a protective measure into an innovative force that rewards ethical foresight.

There are many governance frameworks, and now also laws, already in existence, and all of them focus on managing the risk from AI systems in one way or another - the EU AI Act is a product safety regulation, the NIST AI Framework provides guidelines for risk management, and the White House Executive Order has a similar objective. Their idea is protection. It's a noble idea and it is indeed absolutely necessary. But it deals with the risks post factum. Most of these frameworks work under the assumption that, say, ChatGPT is already there, it is being used, and we need to somehow frame it. I would like to shift to a different perspective - a more philosophical and ethical one, but with very practical consequences: do we consider why we create AI and how it is aligned ethically with our values? I have not seen this question asked in most frameworks, because they are meant to be practical and help businesses grow their already created products. But what if we ask those questions BEFORE a product is created?
What if we introduce a basic requirement that an AI use case must be ethically vetted before its creation - before the data is used for training, before parameters and weights are introduced? Olivia Gambelin has already talked about this in her course "AI Ethics" (which I highly recommend) - ethics can indeed be used as an innovation tool AT THE START OF the creation of a new technology. She gives the example of the app Signal, which uses a privacy-by-design concept: before the product was created, it already included the ethical considerations that other similar communication services like WhatsApp are lacking. And that got me thinking: shouldn't all companies actually do that? Should we encourage our innovators to be ethical to begin with - not only scare them with fines when they misbehave, but reward them when they do well from the start?

My point is that we need to explore far more ethical and philosophical discussions about AI. Think about it: what if regulation was not there only to protect, but to incentivize companies to include ethical and philosophical considerations from the start? For example, introduce a basic new step in the AI Act that requires companies that create or use AI systems to look not only at the efficiency gains of their business model, but at the consequences the systems might have for society. That could include an "Ethics Review Board" within the AI Office with real regulatory power, which can require companies to perform a "social impact assessment" of their product at a very early stage of development. A company can then show that the results of such an assessment are a socially desirable and positive outcome (e.g.
privacy by design like Signal; a company reveals its datasets, proving their variety and showing how bias is tackled at the first steps of development; a chatbot discloses all the copyrighted material used and provides evidence that it has licensed all those materials). The reward for such behavior could be a certification of excellence by the EU, certifying that the product is of the highest quality and is recommended over similar products that merely comply with the strictest requirements but have done nothing on the ethical side.

This is not unheard of - pharma, biotechnology, and nuclear technology have all undergone a similar development in the past. All of these industries started off without much or any specific regulation; then either a huge disaster occurred (Hiroshima and Nagasaki for nuclear technology; the Elixir Sulfanilamide disaster of 1937 for pharma) or, in the case of biotechnology, questions were raised early enough that social and ethical concerns were taken into account as part of its development. After such events, governance efforts started, and now all three of these examples are highly regulated industries from which society benefits (e.g. better healthcare and medicine; higher-quality research in biotech; nuclear disarmament). These examples show a pattern in how transformative technology is normally governed: an initial phase of rapid and uncontrolled development, then often enough a disaster, then reactive regulation, which develops into a mature governance process. I argue that for AI we need to be proactive, and so regulations like the EU AI Act are actually a good thing in principle, even if the details may not yet be that great.

The Asilomar Conference of 1975 provides a compelling precedent for proactive ethical governance.
When scientists first developed the ability to splice DNA between organisms, they didn't wait for a disaster before establishing guidelines. Instead, leading researchers voluntarily paused their work and established a framework for safe development. This didn't slow progress - it enabled it, by building public trust and clear guidelines. Following the Asilomar model, companies developing AI systems could (1) establish clear ethical review processes before beginning development; (2) create transparent documentation of ethical considerations; (3) engage multiple stakeholders in review processes; and (4) define clear boundaries for development. Such a process would take into account ethical and social concerns, which show us in a very specific way what the added value of a given AI system is. The question of whether an "intelligent" system actually contributes to a better society, or is merely a cash cow exercise, becomes much easier to answer.

The aforementioned Ethics Review Board could be part of the AI Office, within its responsibility of ensuring Trustworthy AI. It should be staffed by a variety of experts from different fields, with a focus on the humanities, while technical experts remain active in other areas of the AI Office, such as benchmark creation. It would issue binding recommendations, published by the AI Office, that set out the requirements for companies to obtain a "certificate of excellence". I assume an adjustment of the AI Act would be needed to get all the details right. However, I believe this is a noteworthy initiative, considering all the factors and the ultimate benefits for society. An initial ethical and philosophical review of our systems would not only be better for the majority of people, it would actually be advantageous for the business itself - it has already been shown that ethics can create competitive advantage and brand loyalty towards a product (Signal vs.
WhatsApp is one example - after WhatsApp released its updated T&Cs a few years ago, Signal gained millions of new users within days thanks to its stringent approach to data privacy). Considering ethical issues from the start also helps a lot with compliance, as regulations are merely the foundation upon which an ethical product is built. If you, as a company, think about risks and issues from the start, it is highly likely that you will easily be compliant with any laws, since they deal with exactly that - risks. If, on top of that, you go beyond mere compliance and create a product that not only fulfills the basic requirements but enhances the protection of rights and combines that with a good user experience and added value, the result is lower compliance costs in the future thanks to the robust ethical framework, enhanced trust in the product, and future-proofing against evolving regulations.

What all this shows is that innovation isn't limited to technology itself - it extends to how we govern and regulate any new and impactful systems like AI. A thoughtful, balanced approach to AI policy will not only protect citizens more effectively but create opportunities for sustainable growth and competitive advantage. For practitioners and policymakers alike, the path forward is clear: ethical considerations must become the foundation of AI development, not an afterthought. As the AI governance landscape evolves, the organizations that embrace proactive ethical frameworks today will not only avoid regulatory headaches tomorrow but will build more trustworthy, valuable products that stand the test of time. The question we must ask ourselves is not whether we can afford to integrate ethics from the start, but whether we can afford not to.

  • The Ethics and Legality of Generative AI: A Critical Analysis of the TDM Exception

    A recent academic article by Tim W. Dornis has sparked an important debate about the legal foundations of generative AI, particularly regarding how these systems are trained. The article meticulously analyzes why the Text and Data Mining (TDM) exception in EU copyright law cannot apply to generative AI training. This analysis opens up broader questions about the ethical and legal framework surrounding AI development.

The Legal Disconnect

The article demonstrates that generative AI training fundamentally differs from traditional text and data mining. While TDM aims to extract information and discover patterns, generative AI systems are designed to create outputs that compete with original works. This distinction is crucial because it reveals a significant legal gap: we're trying to force new technology into existing legal frameworks that were never designed to accommodate it. Consider the example of Google's Smart Reply feature discussed in the article. The system only became convincing after training on creative works like novels - not just analyzing patterns, but actually incorporating expressive elements. This shows how generative AI goes far beyond mere data mining, raising serious questions about copyright infringement.

The Broader Implications

This legal analysis reveals a deeper truth about our approach to AI regulation. We can't simply retrofit existing laws to handle the unprecedented challenges posed by generative AI. The technology is too transformative and affects too many aspects of society to be regulated through piecemeal adjustments to existing frameworks.

The Need for Ethical Innovation

The current wave of generative AI development seems primarily driven by profit motives and the race to market. Companies like OpenAI have prioritized rapid deployment over careful consideration of legal and ethical implications.
If the article's analysis is correct, we might be witnessing what could amount to the largest-scale copyright infringement in history - all in the name of innovation. This raises a crucial question: do we want technology built on legally questionable foundations, or should we demand more ethical approaches to AI development? While innovation is important, it shouldn't come at the cost of fundamental legal and ethical principles.

A Call for an Interdisciplinary Approach

The complexity of these issues demonstrates why AI development cannot be left to technologists alone. We need input from legal scholars, ethicists, sociologists, and other experts to ensure AI development serves society's best interests. Had companies like OpenAI consulted more diverse experts before launching their products, they might have taken a more measured approach to development and deployment.

The Way Forward

The article's technical legal analysis points to a broader truth: sometimes technology needs to adapt to legal and ethical frameworks, not the other way around. While laws can and should evolve with technology, core principles like copyright protection exist for good reasons. Instead of finding ways to circumvent these principles, we should be asking how to develop AI systems that respect them from the ground up. We need a new philosophical and ethical framework for AI development - one that prioritizes societal benefit over mere productivity gains. This means having difficult conversations about the trade-offs between rapid innovation and responsible development.

The Challenge Ahead

As we stand at this crucial juncture in technological development, we must make conscious choices about the kind of future we want to create. Do we want to prioritize quick technological advances at any cost, or should we take a more measured approach that ensures our innovations align with our legal and ethical principles?
The answer to this question will shape not just the future of AI, but the future of human creativity and innovation itself. It's time for a broader, more inclusive dialogue about how we develop and deploy these powerful technologies.

Source: "The Training of Generative AI Is Not Text and Data Mining" by Tim W. Dornis, doi.org/10.5771/9783748949558
