An artificial intelligence policy is key to regulating and guiding the development and deployment of AI technologies. Explore how businesses can balance innovation with ethical considerations in AI governance.
Business leaders have adopted artificial intelligence (AI) widely and enthusiastically in recent years. It’s easy enough to see why: AI of all types has the potential to result in massive gains in productivity, innovation, and revenue. So much so that AI may contribute as much as $4.4 trillion a year to the global economy [1]. According to the US Chamber of Commerce, 98 percent of small businesses use AI in one form or another [2].
Given the widespread adoption of the technology, however, it's notable that as few as 44 percent of businesses, large or small, have an artificial intelligence policy in place [3]. A responsible AI policy, also known as AI governance, establishes guardrails for ethical AI use: it spells out how a company ensures its AI models don't negatively affect internal stakeholders or the general public while still advancing business objectives.
AI policy encompasses the regulations, guidelines, and frameworks that govern the use of AI technologies across various sectors. AI policies touch on matters such as:
Credit attribution
Employee review of AI output before publication
Limits on AI’s access to and retention of personal data
Developing a tenable AI policy requires balancing many different, even competing, factors. No two policies are quite the same, and no single policy can make everyone happy. According to officials at the US Chamber of Commerce, the European Union’s AI policy doesn’t properly address risk management or go far enough in encouraging innovation [4].
Some believe that an AI policy that hampers innovation to any extent is worth avoiding. Others disagree and favor a risk-reduction approach.
AI policy is important in various ways, from helping to confront ethical considerations head-on to developing a framework to foster public trust.
Debate about ethical AI use abounds. It's possible that, as more organizations embrace AI, it will disrupt the workforce. Occupations AI may affect include:
Content writer
Graphic designer
Paralegal
Travel adviser
Some say that government regulatory bodies are doing too little to safeguard workers against AI replacement; the result, they worry, will be mass layoffs. Some predict that AI could eliminate as many as 85 million jobs worldwide by 2025 [5]. However, recent data suggests that AI may be disrupting the workplace, but it’s also creating new jobs, with as many as 20 percent of US professionals working in positions that never existed before, including jobs like artificial intelligence engineers [6].
Bias is another topic of concern among AI ethicists. AI output is only as reliable as its training data: a model trained on skewed or unrepresentative data can produce skewed results, and if those using the AI trust its output unreflectively, certain groups of people can be harmed.
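As an illustration, here is a minimal sketch in Python of one common group-level bias check, the "four-fifths" (disparate impact) rule, which compares positive-outcome rates across groups. The groups and decision data are hypothetical, and a real audit would involve much more than a single ratio.

```python
# Minimal sketch of a disparate-impact audit of model decisions.
# Group names and approval data are hypothetical illustrations.
from collections import defaultdict

def selection_rates(decisions):
    """Compute the positive-outcome rate for each group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        positives[group] += int(approved)
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact_ratio(decisions):
    """Ratio of the lowest group selection rate to the highest.
    Values below ~0.8 are often treated as a red flag (the
    'four-fifths rule' used in US employment contexts)."""
    rates = selection_rates(decisions)
    return min(rates.values()) / max(rates.values())

# Hypothetical model decisions as (group, was_approved) pairs
decisions = [("A", True), ("A", True), ("A", False),
             ("B", True), ("B", False), ("B", False)]
print(disparate_impact_ratio(decisions))  # 0.5 -> worth investigating
```

A ratio well below 0.8, as in this toy example, doesn't prove discrimination by itself, but it signals that the model's output deserves human review.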
Many people use AI platforms such as ChatGPT daily. However, only 35 percent of consumers globally trust AI for business applications [7].
AI innovation would likely stall in the absence of public trust. For an AI model to seem trustworthy to the general public, it must work within a few key parameters. An AI model must be:
Explainable: You must be able to explain to a general audience why your AI's results are what they are, what data you trained the model on, how accurate its predictions are, and to what extent the model tracks its training data so that data can be retrieved if needed (one common explainability technique is sketched after this list).
Fair: You must be able to convince people that your AI model is fair: that you trained it on diverse data sets, that it incorporates bias mitigation measures, and that human beings regularly review it for signs of discriminatory output.
Secure: If your AI model is vulnerable to cyberattacks, it will not seem robust enough to trust. A poorly secured AI model puts training data in harm's way, and that training data may include a great deal of sensitive consumer information; data leaks never go over well with the public.
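To make the "explainable" requirement concrete, here is a minimal sketch of permutation importance, one widely used way to show which inputs drive a model's predictions: shuffle one feature at a time and measure how much accuracy drops. The toy model and data are hypothetical.

```python
# Minimal sketch of permutation importance: shuffle one feature at a
# time and measure the resulting drop in accuracy. Toy data throughout.
import random

def accuracy(model, rows, labels):
    """Fraction of rows the model classifies correctly."""
    return sum(model(r) == y for r, y in zip(rows, labels)) / len(rows)

def permutation_importance(model, rows, labels, n_features, seed=0):
    """Return, per feature, the accuracy lost when that feature is shuffled."""
    rng = random.Random(seed)
    baseline = accuracy(model, rows, labels)
    importances = []
    for j in range(n_features):
        column = [r[j] for r in rows]
        rng.shuffle(column)  # break the feature's link to the labels
        perturbed = [r[:j] + (v,) + r[j + 1:] for r, v in zip(rows, column)]
        importances.append(baseline - accuracy(model, perturbed, labels))
    return importances  # a larger drop means the feature matters more

# Hypothetical classifier: predicts 1 whenever the first feature exceeds 0.5.
model = lambda row: int(row[0] > 0.5)
rows = [(0.9, 0.1), (0.2, 0.8), (0.7, 0.3), (0.1, 0.9)]
labels = [1, 0, 1, 0]
print(permutation_importance(model, rows, labels, n_features=2))
# Feature 0 drives every prediction, so it should show the larger drop.
```

Reporting numbers like these alongside a model's output is one way to give a general audience a footing for the "why" behind an AI result.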
Essential elements of effective AI policies range from a data security strategy to a commitment to transparency. Explore the various components in more detail.
It may be smart to center your AI policy on data protection and security. Data protection can be tricky: AI models train on such vast swathes of unstructured data that they are liable, merely as a matter of course, to scoop up exactly the kind of personal information cyber thieves are after. Yet AI itself may help combat the growing problem of cyber threats, according to the US Department of Homeland Security [8]. One basic safeguard, scrubbing obvious personal data before it enters a training corpus, is sketched below.
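Here is a minimal sketch of that idea. The regex patterns are deliberately crude illustrations; a production pipeline would use proper PII detection (named-entity recognition, validation, human review) rather than three regular expressions.

```python
# Minimal sketch of redacting obvious personal data from training text.
# The patterns below are illustrative, not exhaustive.
import re

PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),           # US Social Security number
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"), # US-style phone number
}

def redact(text: str) -> str:
    """Replace matched personal data with labeled placeholders."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact("Contact Jane at jane.doe@example.com or 555-867-5309."))
# Contact Jane at [EMAIL] or [PHONE].
```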
Some believe that data security is a secondary priority: AI's predictive accuracy improves as you broaden its training data sets, so limiting that training data too aggressively may kneecap AI's usefulness and put more security-cautious companies at a competitive disadvantage.
Transparency refers to how understandable—or at least explainable—an individual AI model’s training and output processes are.
More and more people use AI for high-level decision-making every day, including decisions about medical diagnoses, financial advice, and legal guidance. If people are going to use your AI model in such high-stakes ways, it's only right that they be able to understand where the information they receive comes from.
The Artificial Intelligence Act of the European Union (EU) is the first regulatory policy to focus on AI transparency. Some say it’s insufficient. Some major companies recently publicly committed to safer, more transparent AI policies. These companies include:
Amazon
Anthropic
Inflection
Meta
Microsoft
OpenAI
The point is for these companies to disclose their transparency processes in a way that fosters public trust: they could disclose and explain their AI algorithms, share information on input data sets and training methods, and discuss validation and oversight processes. Because the commitments are voluntary, however, they could also do none of that.
Intelligent AI adoption and application may allow small businesses to compete with larger ones where they couldn’t before. That’s the theory, at least, and it suggests a path forward in terms of business fairness.
Should your AI model prove dangerous or discriminatory, it’s important that the relevant people be held accountable. Many government bodies threaten litigation over AI-based discrimination. If found liable, you could be subject to hefty fines.
CEOs, no matter the size of their company, can reassure the public that their AI policies are fair and non-discriminatory by offering ethical statements regarding AI use and general educational materials on their websites. They can also be public-facing when it comes to speaking about AI ethics rather than dodging the issue when it comes up.
A variety of regulatory bodies have proposed AI policies. However, some organizations will have to develop their own.
Government regulatory bodies are focusing more and more on AI. In the US alone, the following departments have made statements regarding ethical AI use:
The US National Science Foundation
The National Institute of Standards and Technology
The US Department of Homeland Security
The President’s “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence” plainly laid out an overall AI policy focused on worker well-being.
Other government bodies see the need to be more specific: For example, the City of Seattle’s AI policy statement puts the focus on the following governing principles [9]:
Innovation and sustainability
Transparency and accountability
Validity and reliability
Bias and harm reduction and fairness
Privacy enhancing
Explainability and interpretability
Security and resiliency
While this highly local approach isn’t uncommon, some argue that piecemeal policymaking will only delay positive change. The development and reinforcement of responsible AI policy, they say, is the province of the federal government alone.
The US Chamber of Commerce-initiated Responsible AI Business Leadership Initiative identifies four key points for the creation of a responsible AI policy [10]:
Educating the Public and Policymakers
Advocating for Federal Policies to Achieve Innovation and Trustworthy AI
Advancing US Leadership in Creating a Global AI Framework
Preventing a Conflicting Patchwork of State and Local Regulations
Some business leaders develop AI policies almost entirely to avoid the negative consequences of not adhering to government regulations. Others are more reflective: some companies are concerned with bias mitigation, others with data purity, consumer privacy, or ethical flexibility. There isn't a single right way to do AI governance, but an intelligent AI policy balances improving business operations with managing risk and honoring certain moral precepts for their own sake.
Savvy AI implementation may ease the burden on teachers by helping them develop their curriculum, granting them more time to teach students in that holistic, humanistic way that a machine algorithm simply can’t replicate.
However, ethical pitfalls exist in the academic realm, as well. Asymmetric AI adoption in education may lead to educational disparities, particularly if funding for AI adoption and training isn’t available to teachers. Furthermore, teachers are busy; they may not have time to master AI in the classroom amid all their other responsibilities.
One particularly concerning potential use case for AI is emotion AI, an automated attention and engagement tracker. This type of surveillance technology can incorrectly flag a student for cheating, resulting in undeserved negative consequences for that student. Emotion AI represents a development in automated surveillance that many find invasive, even as its use increases not only in the classroom but also in the workplace.
AI use has risen in fields such as:
Geology
Biology
Physics
Engineering
Psychology
Sociology
Economics
While AI has the potential to benefit a great number of disciplines, a training gap exists: some disciplines don't invest enough in training researchers to use AI, so those fields can't benefit from it, and AI's gains in research are unevenly distributed. To what extent is this an ethical problem? Some would say AI in the lab isn't a desirable development in the first place: AI may be making people lazy and reducing their capacity for decision-making.
However, AI technology has been responsible for scientific breakthroughs, such as accurately predicting protein structures. You may still question whether it can do much more than an advanced statistical model can. Can AI, in other words, make genuinely new scientific discoveries?
Flexibility and communication with stakeholders are important when developing an artificial intelligence policy. Actionable recommendations for creating effective AI policies include the following.
According to Brad Smith, Vice Chair and President of Microsoft, the purpose of AI is innovation, and regulatory caution should never stifle innovation [11]. Other stakeholders, however, might disagree.
Different types of stakeholders have different concerns about AI. Workers are stakeholders, too, and many of them feel as if mass AI adoption has made them part of a large-scale, high-stakes experiment they never consented to. Some wonder whether management or AI developers should compensate them for their participation in some meaningful way.
Stakeholders of all types want to see revenue increase, which means upstream increases in productivity and efficiency. AI seemed to promise that more readily in the past than it does now. Some argue that business leaders adopted AI too early, before anyone had comprehensively outlined its use cases. Because business leaders largely failed to set ethical guidelines for the use of the technology that so enchanted them, many have suffered reputational damage and legal jeopardy as a result.
AI transparency may be more important in certain use cases than others; that is, people may demand greater transparency when an AI model deals in higher-stakes output. As use cases change, the amount or type of pressure brought to bear on an individual company over its AI policy may change, too.
Disclosures many seek in an AI policy include the following (one common way to package them is sketched after this list):
Risk level
Model policy
Training data
Training and testing accuracy
Bias
Explainability metrics
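One common way to package these disclosures is a "model card," a structured summary published alongside a model. Here is a minimal, machine-readable sketch; the field names and every value are hypothetical, and real model cards typically carry far more detail.

```python
# Minimal sketch of a machine-readable model card covering the
# disclosures listed above. All names and values are hypothetical.
import json
from dataclasses import asdict, dataclass, field

@dataclass
class ModelCard:
    name: str
    risk_level: str          # e.g., "minimal", "limited", "high"
    intended_use: str        # model policy: what the model may be used for
    training_data: str       # provenance of the training corpus
    train_accuracy: float
    test_accuracy: float
    bias_findings: list = field(default_factory=list)
    explainability: str = "" # method used to explain individual outputs

card = ModelCard(
    name="loan-screener-v2",
    risk_level="high",
    intended_use="Pre-screening consumer loan applications; not final decisions",
    training_data="2019-2023 internal applications, personal data removed",
    train_accuracy=0.93,
    test_accuracy=0.88,
    bias_findings=["selection-rate ratio of 0.86 across age bands"],
    explainability="per-decision feature attributions",
)
print(json.dumps(asdict(card), indent=2))  # publishable disclosure record
```

Publishing a card like this doesn't settle the intellectual property tension discussed next, but it gives regulators and the public a consistent artifact to evaluate.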
Arguably, increased transparency risks infringing on intellectual property rights: the way certain AI models function, or the proprietary nature of the underlying technology, may be trade secrets that companies can reasonably expect to keep out of competitors' hands. Business leaders will have to be adaptable enough to strike a balance between opposing goals, such as transparency and privacy, as public attitudes and government requirements around AI change over time.
AI is only as good as the data you train it on, and training is not in itself the same thing as improving. Many times, AI models require human intervention to work well.
One issue programmers encounter is model drift, which refers to the worsening of an AI model’s performance over time due to, among other things, changes in data inputs. This results in worsening outputs, which can negatively affect the way those who rely on AI for information make important decisions.
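As an illustration of catching drift early, here is a minimal sketch that compares the distribution of one live input feature against its training-time baseline using the Population Stability Index (PSI). The data, bin count, and rule-of-thumb thresholds are illustrative assumptions, not universal standards.

```python
# Minimal sketch of drift detection with the Population Stability Index.
# Rule of thumb (an assumption, not a standard): < 0.1 stable,
# 0.1-0.25 worth watching, > 0.25 investigate and consider retraining.
import math

def psi(baseline, live, n_bins=10):
    """PSI between a feature's training-time and live distributions."""
    lo = min(min(baseline), min(live))
    hi = max(max(baseline), max(live))
    width = (hi - lo) / n_bins or 1.0  # guard against a degenerate range

    def proportions(sample):
        counts = [0] * n_bins
        for x in sample:
            counts[min(int((x - lo) / width), n_bins - 1)] += 1
        # Floor empty bins at a tiny value so the log below is defined.
        return [max(c / len(sample), 1e-6) for c in counts]

    p, q = proportions(baseline), proportions(live)
    return sum((a - b) * math.log(a / b) for a, b in zip(p, q))

baseline = [x / 100 for x in range(100)]    # inputs seen during training
live = [0.5 + x / 200 for x in range(100)]  # live inputs, shifted upward
print(f"PSI = {psi(baseline, live):.2f}")   # well above 0.25: flag for review
```

A check like this can run on a schedule against production inputs, so the human intervention the model needs happens before outputs degrade noticeably.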
AI requires continuous evaluation, reevaluation, and reworking. So, remain open to collaboration with third-party experts. They may be capable of finding solutions you couldn’t, all for the common purpose of continually improving your respective AI models. You don’t need to develop your AI policy in a vacuum.
An organization’s artificial intelligence policy depends on a number of factors, such as relevant use cases and desired metrics. Explore AI governance and ethics, and build in-demand skills with online programs like Generative AI for Everyone: Basics and Applications for All, a beginner-friendly course ideal for learning the fundamentals of AI. Go deeper into the field with the IBM AI Engineering Professional Certificate, a 13-course series that can help you develop job-ready skills.
McKinsey. “AI could increase corporate profits by $4.4 trillion a year, according to new research, https://www.mckinsey.com/mgi/overview/in-the-news/ai-could-increase-corporate-profits-by-4-trillion-a-year-according-to-new-research.” Accessed February 20, 2025.
US Chamber of Commerce. “New Study Reveals Nearly All U.S. Small Businesses Leverage AI-Enabled Tools, Warns Proposed Regulations Could Hinder Growth, https://www.uschamber.com/technology/artificial-intelligence/new-study-reveals-nearly-all-u-s-small-businesses-leverage-ai-enabled-tools-warns-proposed-regulations-could-hinder-growth.” Accessed February 20, 2025.
Littler. “2024 AI C-Suite Survey Report: Balancing Risk and Opportunity in AI Decision-Making, https://www.littler.com/files/2024_littler_ai_csuite_survey_report.pdf.” Accessed February 20, 2025.
US Chamber of Commerce. “Future of AI: EU AI Act Fails to Strike Sensible Balance, https://www.uschamber.com/technology/artificial-intelligence/future-of-ai-latest-updates.” Accessed February 20, 2025.
Michigan Journal of Economics. “Is AI taking over the job market?, https://sites.lsa.umich.edu/mje/2024/01/03/is-ai-taking-over-the-job-market.” Accessed February 20, 2025.
LinkedIn. “Work Change Report: AI is Coming to Work, https://economicgraph.linkedin.com/content/dam/me/economicgraph/en-us/PDF/Work-Change-Report.pdf.” Accessed February 20, 2025.
IBM. “What is responsible AI?, https://www.ibm.com/topics/responsible-ai.” Accessed February 20, 2025.
US Department of Homeland Security. “Acquisition and Use of Artificial Intelligence and Machine Learning Technologies by DHS Components, https://www.dhs.gov/sites/default/files/2023-09/23_0913_mgmt_139-06-acquistion-use-ai-technologies-dhs-components.pdf.” Accessed February 20, 2025.
Seattle Government. “City of Seattle Releases Generative Artificial Intelligence Policy Defining Responsible Use for City Employees, https://harrell.seattle.gov/2023/11/03/city-of-seattle-releases-generative-artificial-intelligence-policy-defining-responsible-use-for-city-employees.” Accessed February 20, 2025.
US Chamber of Commerce. “U.S. Chamber Launches ‘Responsible AI Business Leadership Initiative,’ An Education and Advocacy Effort to Promote the Responsible Development and Use of Artificial Intelligence, https://www.uschamber.com/technology/artificial-intelligence/u-s-chamber-launches-responsible-ai-business-leadership-initiative-an-education-and-advocacy-effort-to-promote-the-responsible-development-and-use-of-artificial-intelligence.” Accessed February 20, 2025.
US Chamber of Commerce. “Microsoft President: Responsible AI Development Can Drive Innovation, https://www.uschamber.com/on-demand/economy/microsoft-president-responsible-ai-development-can-drive-innovation.” Accessed February 20, 2025.