
AI without ethics stifles innovation

  • Writer: Petko Getov
  • May 4
  • 6 min read


Innovation can be enhanced by ethics and law

While legal instruments like the EU AI Act and US and Chinese regulations, as well as governance frameworks like the NIST AI Risk Management Framework, focus on managing risks from existing AI systems, they often operate reactively, addressing problems after technologies have been deployed. This article proposes a fundamental shift: integrating ethics into the earliest stages of AI development rather than merely imposing restrictions afterward. By examining historical lessons from pharmaceutical, nuclear, and biotechnology governance, I argue that AI systems developed without foundational ethical frameworks cannot truly benefit society. More importantly, I propose concrete mechanisms to transform governance from a protective measure into an innovative force that rewards ethical foresight.


There are many governance frameworks, and now also laws, already in existence, and all of them focus on managing the risk from AI systems in one way or another - the EU AI Act is a product safety regulation, the NIST AI Risk Management Framework provides guidelines for risk management, and the White House Executive Order has a similar objective. Their purpose is protection. It's a noble idea and it is indeed absolutely necessary, but it deals with the risks post factum. Most of these frameworks work under the assumption that, say, ChatGPT is already there, it is already being used, and we need to somehow frame it.


I would like to shift to a different perspective - a more philosophical and ethical one, but with very practical consequences: do we consider why we create AI and how it is ethically aligned with our values? I have not seen this question asked in most frameworks, because they are meant to be practical and to help businesses grow products they have already created. But what if we ask those questions BEFORE a product is created? What if we introduce a basic requirement that an AI use case must be ethically vetted before its creation - before the training data is assembled, before parameters and weights are set?


Olivia Gambelin has already talked about this in her course "AI Ethics" (which I highly recommend) - ethics can indeed be used as an innovation tool AT THE START OF the creation of a new technology. She gives the example of the app Signal, which follows a privacy-by-design approach: before the product was even built, it already incorporated the ethical consideration that similar communication services like WhatsApp are lacking. And that got me thinking: shouldn't all companies actually do that? Should we encourage our innovators to be ethical to begin with - not only scare them with fines when they misbehave, but reward them when they do well from the start?


My point is that we need far more ethical and philosophical discussion about AI. Think about it: what if regulation was not there only to protect, but also to incentivize companies to include ethical and philosophical considerations from the start? For example, the AI Act could introduce a basic new step requiring companies that create or use AI systems to look not only at the efficiency gains for their business model, but also at the consequences the systems might have for society.

What if regulation was not there only to protect, but to incentivize companies to include ethical considerations from the start?

That could include an "Ethics Review Board" within the AI Office that has real regulatory power and can require companies to perform a "social impact assessment" of their product at a very early stage of development. A company could then show that the results of such an assessment point to a socially desirable and positive outcome (e.g. privacy by design, as with Signal; a company that reveals its datasets, demonstrating their variety and showing how bias is tackled in the first steps of development; a chatbot that discloses all the copyrighted material used and provides evidence that it has licensed those materials).


The reward for such behavior could be an EU certificate of excellence, attesting that the product is of the highest quality and recommended over similar products that merely comply with the strictest requirements but have done nothing further on the ethical side.


This is not unheard of - the pharmaceutical industry, biotechnology and nuclear technology have all undergone a similar development in the past. All of these industries and technologies started off with little or no specific regulation; then either a huge disaster occurred (e.g. Hiroshima and Nagasaki for nuclear technology, or the elixir sulfanilamide disaster of 1937 for pharma), or, as in the case of biotechnology, questions were raised early enough for social and ethical concerns to be taken into account as part of its development.


After such events, governance efforts began, and all three of these examples are now highly regulated industries from which society benefits (e.g. better healthcare and medicine, higher-quality research in biotech, nuclear disarmament). They show a pattern in how transformative technology is normally governed: an initial phase of rapid and uncontrolled development, then often enough a disaster, then reactive regulation, which eventually develops into a mature governance process. I argue that for AI we need to be proactive, and so regulations like the EU AI Act are actually a good thing in principle, even if the details may not yet be that great.


The Asilomar Conference of 1975 provides a compelling precedent for proactive ethical governance. When scientists first developed the ability to splice DNA between organisms, they didn't wait for a disaster before establishing guidelines. Instead, leading researchers voluntarily paused their work and established a framework for safe development. This didn't slow progress - it enabled it, by building public trust and establishing clear guidelines.


Following the Asilomar model, companies developing AI systems could:

(1) establish clear ethical review processes before beginning development;

(2) create transparent documentation of ethical considerations;

(3) engage multiple stakeholders in review processes;

(4) define clear boundaries for development.


Such a process would take into account ethical and social concerns and show us, in very specific terms, what the added value of a given AI system is. The question of whether an "intelligent" system actually contributes to a better society, or is merely a cash-cow exercise, becomes much easier to answer.
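
To make step (2) above a bit more tangible, here is a minimal, purely illustrative sketch of what a machine-readable pre-development ethics review record could look like. The field names and categories are my own assumptions for the sake of the example - they are not taken from the AI Act, from any standard, or from an existing tool, and a real social impact assessment would of course be far richer.

```python
from dataclasses import dataclass, field
from datetime import date

# Illustrative sketch only: field names and categories are assumptions,
# not requirements from the AI Act or any existing standard.

@dataclass
class EthicsReviewRecord:
    use_case: str                      # what the AI system is meant to do
    review_date: date                  # when the pre-development review took place
    stakeholders_consulted: list[str]  # step (3): multiple stakeholders in the review
    ethical_considerations: dict[str, str] = field(default_factory=dict)  # step (2): documented considerations
    development_boundaries: list[str] = field(default_factory=list)       # step (4): clear boundaries
    approved_to_proceed: bool = False  # step (1): outcome of the review before development starts

# Example record for a hypothetical product
record = EthicsReviewRecord(
    use_case="Customer-support chatbot",
    review_date=date(2025, 5, 4),
    stakeholders_consulted=["legal", "data protection officer", "user representatives"],
    ethical_considerations={
        "privacy": "privacy by design; no message content retained",
        "copyright": "only licensed or self-produced training material",
        "bias": "dataset variety documented before training starts",
    },
    development_boundaries=["no emotion recognition", "no profiling of minors"],
    approved_to_proceed=True,
)

print(record.approved_to_proceed)
```

However such a record is formatted in practice, the point is that it exists, is produced before development begins, and can be shown to a reviewer such as the proposed Ethics Review Board.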


The aforementioned Ethics Review Board could be part of the AI Office, within its responsibility for ensuring trustworthy AI. It should be staffed by experts from a variety of fields, with a focus on the humanities, while the technical experts remain active in other areas of the AI Office, such as benchmark creation. It would issue binding recommendations, published by the AI Office, that set out the requirements companies must meet to obtain a "certificate of excellence". I assume an adjustment of the AI Act would be needed to get all the details right, but I believe this is a noteworthy initiative, considering all the factors and the ultimate benefits for society.


An initial ethical and philosophical review of our systems would not only be better for the majority of people, it would also be advantageous for the business itself - it has already been shown that ethics can create a competitive advantage and brand loyalty towards a product (Signal vs. WhatsApp is one example: after WhatsApp released its updated terms and conditions a few years ago, Signal gained millions of new users within days thanks to its stringent approach to data privacy). Considering ethical issues at the start also helps a lot with compliance, as regulations are merely the foundation upon which an ethical product is built.


If you, as a company, think about risks and issues from the start, it is highly likely that you will find compliance with any laws easy, since that is exactly what they deal with - risks. If, on top of that, you go beyond mere compliance and create a product that not only fulfills the basic requirements but enhances the protection of rights, combined with a good user experience and added value, the result is lower compliance costs in the future thanks to the robust ethical framework, enhanced trust in the product, and future-proofing against evolving regulations.


What all this shows is that innovation isn't limited to technology itself—it extends to how we govern and regulate any new and impactful systems like AI. A thoughtful, balanced approach to AI policy will not only protect citizens more effectively but create opportunities for sustainable growth and competitive advantage. For practitioners and policymakers alike, the path forward is clear: ethical considerations must become the foundation of AI development, not an afterthought.


As the AI governance landscape evolves, those organizations that embrace proactive ethical frameworks today will not only avoid regulatory headaches tomorrow but will build more trustworthy, valuable products that stand the test of time. The question we must ask ourselves is not whether we can afford to integrate ethics from the start, but whether we can afford not to.

 
 
 
