Artificial Intelligence Policy - Navigating the Future of AI Governance

Written by Coursera Staff • Updated on

An artificial intelligence policy is key to regulating and guiding the development and deployment of AI technologies. Explore how businesses can balance innovation and ethical considerations in AI governance.

[Featured Image] A professional using a tablet to control an AI robot while being aware of their company's artificial intelligence policy.

Business leaders have adopted artificial intelligence (AI) widely and enthusiastically in recent years. It’s easy enough to see why: AI of all types has the potential to result in massive gains in productivity, innovation, and revenue. So much so that AI may contribute as much as $4.4 trillion a year to the global economy [1]. According to the US Chamber of Commerce, 98 percent of small businesses use AI in one form or another [2]. 

Given the widespread adoption of the technology, however, it’s notable that as few as 44 percent of businesses, large or small, have an artificial intelligence policy in place [3]. A responsible AI policy, also known as AI governance, establishes guardrails for ethical AI use, spelling out how a company ensures its AI models don’t negatively affect internal stakeholders or the general public while still advancing business objectives. 

What is artificial intelligence policy?

AI policy encompasses the regulations, guidelines, and frameworks that govern the use of AI technologies across various sectors. AI policies touch on matters such as: 

  • Credit attribution

  • Employee reviews of AI pre-publication

  • Limits on AI’s access to and retention of personal data

Developing a tenable AI policy requires balancing many different, even competing, factors. No two policies are quite the same, and no single policy can make everyone happy. According to officials at the US Chamber of Commerce, the European Union’s AI policy doesn’t properly address risk management or go far enough in encouraging innovation [4]. 

Some believe that an AI policy that hampers innovation to any extent is worth avoiding. Others disagree and favor a risk-reduction approach. 

Importance of artificial intelligence policy

AI policy is important in various ways, from helping to confront ethical considerations head-on to developing a framework to foster public trust. 

Ethical considerations 

Debate about ethical AI use abounds. It’s possible that, as more organizations embrace AI, it will disrupt the workforce. Occupations AI may affect include: 

  • Content writer

  • Graphic designer

  • Paralegal

  • Travel adviser

Some say that government regulatory bodies are doing too little to safeguard workers against AI replacement; the result, they worry, will be mass layoffs. Some predict that AI could eliminate as many as 85 million jobs worldwide by 2025 [5]. However, recent data suggests that AI may be disrupting the workplace, but it’s also creating new jobs, with as many as 20 percent of US professionals working in positions that never existed before, including jobs like artificial intelligence engineers [6].

Bias is another topic of concern among AI ethicists. AI output depends on clean, bias-free training data; if that data carries bias and those using AI trust its output unreflectively, certain groups of people could be harmed. 

Public trust 

Many people use AI platforms such as ChatGPT daily. However, only 35 percent of consumers globally trust AI for business applications [7]. 

AI innovation would likely stall in the absence of public trust. For an AI model to seem trustworthy to the general public, it must work within a few key parameters. An AI model must be: 

  • Explainable: You must be able to explain to a general audience why your AI model’s results are what they are, what data you trained it on, how accurate your predictive models are, and to what extent your models track training data such that it can be retrieved if needed. 

  • Fair: You must be able to convince people that your AI model is fair by showing, more or less, that you trained it on diverse data sets, that it incorporates bias mitigation software, and that human beings constantly review it for signs of discriminatory output. 

  • Secure: If your AI model is subject to cyber attacks, it will not seem robust enough to place trust in. A poorly secured AI model puts training data in harm’s way. That training data may include a great deal of sensitive consumer information; data leaks never go over well with the public. 
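
The “fair” criterion above is often checked with concrete metrics. As a minimal sketch, the following Python snippet computes a disparate impact ratio, one common fairness check that compares favorable-outcome rates across groups. The group labels, sample decisions, and the 0.8 threshold (the so-called four-fifths rule) are illustrative assumptions, not details from this article.

```python
# Hypothetical sketch: a disparate impact ratio check on a model's
# binary decisions, grouped by a protected attribute.
# All data and the 0.8 threshold are illustrative assumptions.

def disparate_impact_ratio(decisions, groups, privileged, unprivileged):
    """Ratio of favorable-outcome rates: unprivileged / privileged."""
    def rate(group):
        outcomes = [d for d, g in zip(decisions, groups) if g == group]
        return sum(outcomes) / len(outcomes)
    return rate(unprivileged) / rate(privileged)

decisions = [1, 0, 1, 1, 0, 1, 0, 0]   # 1 = favorable outcome (e.g., approved)
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]

ratio = disparate_impact_ratio(decisions, groups, privileged="a", unprivileged="b")
print(f"disparate impact ratio: {ratio:.2f}")  # a value below 0.8 suggests review
```

In practice, a check like this would run as part of the human review process the bullet describes, alongside dedicated bias mitigation tooling.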

Key components of AI policy

Essential elements of effective AI policies include everything from a data security strategy to a transparency framework. Explore the various components in more detail. 

Data privacy and security 

It may be smart to center your AI policy on data protection and security. Data protection can be tricky: You train AI models on such vast swathes of unstructured data that they, merely as a matter of course, are liable to scoop up just the kind of personal information cyber thieves are after. Yet AI in itself may help combat the growing problem of cyber threats, according to the US Department of Homeland Security [8]. 

Some believe that data security is a secondary priority. AI’s predictive accuracy improves as you broaden its training data sets; limiting that training data too much may kneecap AI use, putting more security-cautious companies at an unfair competitive disadvantage. 

Transparency and explainability

Transparency refers to how understandable—or at least explainable—an individual AI model’s training and output processes are. 

More and more people use AI for high-level decision-making every day, including decisions about medical diagnoses, financial advice, and legal guidance. If people are going to, as a rule, use your AI model in such high-stakes ways, it’s only right that they be able to understand where the information they receive comes from.

The Artificial Intelligence Act of the European Union (EU) is the first regulatory policy to focus on AI transparency. Some say it’s insufficient. Some major companies recently publicly committed to safer, more transparent AI policies. These companies include: 

  • Amazon

  • Anthropic

  • Google

  • Inflection

  • Meta

  • Microsoft

  • OpenAI

The point is for these companies to disclose their transparency processes in a way that fosters public trust. They could disclose and explain their AI algorithms, share information on input data sets and training methods, and discuss validation and oversight processes. Because the commitments are voluntary, however, they could also do only some of that, or nothing at all.

Fairness and non-discrimination

Intelligent AI adoption and application may allow small businesses to compete with larger ones where they couldn’t before. That’s the theory, at least, and it suggests a path forward in terms of business fairness. 

Should your AI model prove dangerous or discriminatory, it’s important that the relevant people be held accountable. Many government bodies threaten litigation over AI-based discrimination. If found liable, you could be subject to hefty fines. 

CEOs, no matter the size of their company, can reassure the public that their AI policies are fair and non-discriminatory by offering ethical statements regarding AI use and general educational materials on their websites. They can also be public-facing when it comes to speaking about AI ethics rather than dodging the issue when it comes up. 

Who uses artificial intelligence policy?

A variety of regulatory bodies have proposed AI policies. However, some organizations will have to develop their own.

Governments

Government regulatory bodies are focusing more and more on AI. In the US alone, the following departments have made statements regarding ethical AI use: 

  • The US National Science Foundation

  • The National Institute of Standards and Technology

  • The US Department of Homeland Security

The President’s “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence” plainly laid out an overall AI policy focused on worker well-being. 

Other government bodies see the need to be more specific: For example, the City of Seattle’s AI policy statement puts the focus on the following governing principles [9]: 

  1. Innovation and sustainability 

  2. Transparency and accountability

  3. Validity and reliability

  4. Bias and harm reduction and fairness

  5. Privacy enhancing

  6. Explainability and interpretability

  7. Security and resiliency

While this highly local approach isn’t uncommon, some argue that piecemeal policymaking will only delay positive change. The development and reinforcement of responsible AI policy, they say, is the province of the federal government alone. 

Businesses

The US Chamber of Commerce-initiated Responsible AI Business Leadership Initiative identifies four key points for the creation of a responsible AI policy [10]: 

  1. Educating the Public and Policymakers 

  2. Advocating for Federal Policies to Achieve Innovation and Trustworthy AI 

  3. Advancing US Leadership in Creating a Global AI Framework 

  4. Preventing a Conflicting Patchwork of State and Local Regulations

Some business leaders develop AI policies almost entirely due to the potential negative consequences of not adhering to government regulations. Others are more reflective; some companies are concerned with bias mitigation, others with data purity, consumer privacy, or ethical flexibility. There isn’t a single right way to do data governance, but an intelligent AI policy balances the improvement of business operations with risk management and adherence to certain moral precepts for its own sake. 

Academia

Savvy AI implementation may ease the burden on teachers by helping them develop their curriculum, granting them more time to teach students in that holistic, humanistic way that a machine algorithm simply can’t replicate. 

However, ethical pitfalls exist in the academic realm, as well. Asymmetric AI adoption in education may lead to educational disparities, particularly if funding for AI adoption and training isn’t available to teachers. Furthermore, teachers are busy; they may not have time to master AI in the classroom amid all their other responsibilities. 

One particularly concerning potential use case for AI is emotion AI, an automated attention and engagement tracker. This type of surveillance technology can incorrectly flag a student for cheating, resulting in undeserved negative consequences for that student. Emotion AI represents a development in automated surveillance that many find invasive, even as its use increases not only in the classroom but also in the workplace. 

Research

AI use has risen in fields such as: 

  • Geology

  • Biology

  • Physics

  • Engineering

  • Psychology

  • Sociology

  • Economics

While AI has the potential to benefit a great number of disciplines, a training gap exists: Some disciplines don’t invest enough in training researchers to use AI, which means those fields can’t benefit from it and AI’s benefits in research remain unevenly distributed. To what extent is this an ethical problem? Some would say AI in the lab isn’t a desirable development in the first place: AI may be making people lazy and reducing their capacity for decision-making. 

However, AI technology has been responsible for scientific breakthroughs, such as accurately predicting protein structure. You may also question whether it can do much more than an advanced statistical model can. Can AI, in other words, make new scientific discoveries? 

Best practices for developing AI policy

Flexibility and communication with stakeholders are important when developing an artificial intelligence policy. Actionable recommendations for creating effective AI policies include the following.

Engage stakeholders 

According to Brad Smith, Vice Chair and President of Microsoft, the purpose of AI is innovation, and regulatory caution should never stifle innovation [11]. Other stakeholders, however, might disagree. 

Different types of stakeholders have different concerns about AI. Workers are stakeholders, too, and many of them feel as if mass AI adoption has made them part of a large-scale, high-stakes experiment they never consented to. Some wonder whether management or AI developers should compensate them for their participation in some meaningful way. 

Stakeholders of all types want to see revenue increase, which means upstream increases in productivity and efficiency. AI seemed to promise that more readily in the past than it does now. Some argue that business leaders jumped at adopting AI too early, before anyone had outlined its use cases in a comprehensive way. Because business leaders largely failed to outline ethical guidelines for the technology that so enchanted them, many have suffered consequent reputational damage and legal jeopardy. 

Be adaptable

AI transparency may be more important in certain use cases than in others; that is, people may demand greater transparency when an AI model deals in higher-stakes output. As use cases change, the amount or type of pressure brought to bear on an individual company regarding its AI policy may change as well. 

Disclosures many seek in an AI policy include: 

  • Risk level

  • Model policy

  • Training data

  • Training and testing accuracy

  • Bias

  • Explainability metrics
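
As a hypothetical illustration, the disclosures listed above could be captured in a machine-readable “model card” record. Every field name and value below is an invented placeholder; a real policy would define its own schema.

```python
# Hypothetical sketch: the article's disclosure list as a simple
# structured record. All values are invented placeholders.

from dataclasses import dataclass, field

@dataclass
class ModelDisclosure:
    risk_level: str                # e.g., "low", "limited", "high"
    model_policy: str              # link to or summary of the governing policy
    training_data: str             # description of data sources
    train_accuracy: float          # training accuracy
    test_accuracy: float           # held-out testing accuracy
    known_biases: list = field(default_factory=list)
    explainability: str = "unspecified"  # explainability method or metrics

card = ModelDisclosure(
    risk_level="limited",
    model_policy="internal-policy-v2",
    training_data="licensed text corpus, 2020-2023",
    train_accuracy=0.94,
    test_accuracy=0.91,
    known_biases=["underrepresents non-English sources"],
    explainability="feature attributions reviewed quarterly",
)
print(card.risk_level, card.test_accuracy)
```

Publishing even a lightweight record like this addresses several of the transparency demands at once while leaving proprietary model details undisclosed.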

Arguably, increased transparency risks infringing on intellectual property rights. The way certain AI models function, or the proprietary nature of the technology at hand, may be trade secrets that companies can reasonably expect to keep out of competitors’ hands. Business leaders will have to be adaptable enough to strike a balance between opposing goals, such as transparency and privacy, as public attitudes and government requirements toward AI change over time. 

Evaluate continuously

AI is only as good as the data you train it on, and training is not in itself the same thing as improving. Many times, AI models require human intervention to work well. 

One issue programmers encounter is model drift, which refers to the worsening of an AI model’s performance over time due to, among other things, changes in data inputs. This results in worsening outputs, which can negatively affect the way those who rely on AI for information make important decisions. 
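
As a minimal sketch of the continuous evaluation this section describes, the snippet below flags possible drift when a model’s recent accuracy falls below its baseline. The window size and tolerance are illustrative assumptions; production monitoring would typically add statistical tests on the input distributions as well.

```python
# Hypothetical sketch: flag model drift by comparing mean accuracy in the
# most recent window against the earliest (baseline) window.
# The window size and 0.05 tolerance are illustrative assumptions.

def drift_alert(accuracies, window=3, tolerance=0.05):
    """Return True if recent mean accuracy drops more than `tolerance`
    below the baseline mean, suggesting the model needs rework."""
    baseline = sum(accuracies[:window]) / window
    recent = sum(accuracies[-window:]) / window
    return (baseline - recent) > tolerance

# Accuracy measured monthly on fresh evaluation data (invented numbers)
monthly_accuracy = [0.91, 0.90, 0.92, 0.89, 0.85, 0.82]
print(drift_alert(monthly_accuracy))  # True: recent mean fell ~0.06 below baseline
```

An alert like this is a trigger for the human intervention the paragraph describes, such as retraining on refreshed data or auditing the input pipeline.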

AI requires continuous evaluation, reevaluation, and reworking. So, remain open to collaboration with third-party experts. They may be capable of finding solutions you couldn’t, all for the common purpose of continually improving your respective AI models. You don’t need to develop your AI policy in a vacuum. 

Learn more about AI policy with Coursera

An organization’s artificial intelligence policy depends on a number of factors, such as relevant use cases and desired metrics. Explore AI governance and ethics, and build in-demand skills with online programs like Generative AI for Everyone: Basics and Applications for All, a beginner-friendly course ideal for learning the fundamentals of AI. Go deeper into the field with the IBM AI Engineering Professional Certificate, a 13-course series that can help you develop job-ready skills. 


Article sources

1

McKinsey. “AI could increase corporate profits by $4.4 trillion a year, according to new research,” https://www.mckinsey.com/mgi/overview/in-the-news/ai-could-increase-corporate-profits-by-4-trillion-a-year-according-to-new-research. Accessed February 20, 2025. 


This content has been made available for informational purposes only. Learners are advised to conduct additional research to ensure that courses and other credentials pursued meet their personal, professional, and financial goals.
