Demystifying AI: A Primer
Blog Post
Oct. 7, 2024
OTI’s “Demystifying AI” series breaks down what we really mean when we talk about Artificial Intelligence (AI). In order to understand the uses and potential impacts of AI—both generative and predictive—on society and individuals, a critical and holistic overview of what AI can and cannot do is necessary. While much of the public discourse homes in on generative AI in particular, the world needs to pay more attention to how predictive AI is used to forecast future outcomes and further automate decision-making. This series provides concrete examples of both the promises and perils of AI so we can move beyond the hype around this technology and ensure that we are responsibly shaping how AI is used—instead of allowing AI to shape us.
The buzz around artificial intelligence (AI) is everywhere. Reinvigorated by ChatGPT’s release in late 2022, conversations surrounding AI paint dramatic pictures of how the tech will revolutionize life as we know it—for better and for worse. Such speculation makes it difficult for users and policymakers to parse the actual benefits and challenges of AI. The truth is that AI is a broad, complex term that encompasses a variety of technologies and applications. What we know as AI has actually been around for decades and powers activity across all sectors of life.
While ChatGPT has brought generative AI to the forefront of the AI governance conversation, not enough attention has been paid to predictive AI, especially when it is used in consequential decision-making. Understanding the potential harms and benefits of different AI applications can better inform safeguards and regulations around the creation, deployment, and governance of AI systems.
What Is AI?
There is no single definition of AI shared across academia, industry, and government. Generally, AI is used as an umbrella term to refer to both a field of study and the machine-based systems that use mathematical models to analyze inputs in order to complete specific tasks, such as making predictions, recommendations, or decisions, or generating content. AI goes beyond traditional data processing, with systems using data and algorithms (sets of rules or instructions) to learn, reason, problem-solve, process language, and perceive their environment—hence why we call these systems “intelligent.”
“Artificial intelligence (AI) means a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments. Artificial intelligence systems use machine and human-based inputs to: perceive real and virtual environments; abstract such perceptions into models through analysis in an automated manner; and use model inference to formulate options for information or action.” — National Artificial Intelligence Initiative [15 USC 9401(3)]
How Does AI Work?
AI systems can use machine learning (based on algorithms and statistical models); deep learning (based on complex layers of interconnected computing systems); or a combination of both to accomplish a variety of tasks, including processing data, making predictions, and creating content. Depending on the intended purpose, scientists use different training models to “teach” AI systems. Common training models for AI systems include:
- supervised learning (where humans help determine outcomes);
- unsupervised learning (where the AI model determines outcomes on its own);
- reinforcement learning (where the AI model learns to complete tasks through trial and error);
- and neural networks (where large and complex data sets are processed through multiple layers of computing nodes).
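The first of these approaches, supervised learning, can be sketched in a few lines of code: a model with a single adjustable parameter is repeatedly shown labeled examples (inputs paired with known correct outputs determined by humans) and nudges its parameter to reduce its prediction error. This is a deliberately tiny toy under made-up data, not how production AI systems are built.

```python
# A minimal sketch of supervised learning: humans supply labeled examples
# (inputs paired with known correct outputs), and the model adjusts its
# parameter to shrink its prediction error.

# Hypothetical training data: inputs x with known labels y (here, y = 2x).
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0), (4.0, 8.0)]

weight = 0.0            # the model's single learnable parameter
learning_rate = 0.01

for _ in range(1000):   # repeatedly show the model the labeled examples
    for x, y in data:
        prediction = weight * x
        error = prediction - y               # how wrong the model is
        weight -= learning_rate * error * x  # nudge the parameter toward the label

print(round(weight, 2))  # the model has learned the pattern y = 2x, printing 2.0
```

After training, the parameter settles near 2.0: the model has "learned" the relationship implicit in the labels rather than being explicitly programmed with it, which is the core idea the list above describes.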
Types of AI
AI is used for a variety of simple and complex tasks. To better understand AI applications and the potential implications for users—both positive and negative—AI can generally be thought of in two categories:
- Predictive AI: Predictive AI refers to systems that use existing data and human-defined algorithms to find and identify patterns; organize data; and make predictions, inferences, or forecasts about future data. These predictions can consequently be used to make recommendations or decisions.
- Generative AI: Generative AI uses existing data to find and identify patterns and distributions in order to create new content in response to a human prompt. Generative AI can produce original written, visual, and auditory content—or a combination of all three.
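To make the generative idea above concrete, here is a deliberately tiny sketch: the "training" step records which word tends to follow which in a small body of existing text, and the "generation" step samples new word sequences from those learned patterns in response to a prompt word. The corpus and prompt here are made up for illustration; real generative AI systems learn far richer statistical patterns from vastly more data.

```python
import random
from collections import defaultdict

# Hypothetical "existing data": a tiny corpus of text.
corpus = "the cat sat on the mat and the cat ran on the mat".split()

# "Training": record, for each word, the words observed to follow it.
follows = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current].append(nxt)

# "Generation": start from a prompt word and repeatedly sample a plausible
# next word from the learned patterns, producing new text.
random.seed(0)
word = "the"
output = [word]
for _ in range(5):
    word = random.choice(follows[word])
    output.append(word)

print(" ".join(output))
```

Every transition in the generated sentence was observed in the training text, yet the sentence itself may never have appeared there: new content assembled from learned patterns, which is the generative principle in miniature.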
The potential for both predictive and generative AI is endless. AI allows for greater learning from existing data and can reduce certain types of administrative and repetitive work, increase productivity, and inform critical decision-making for future scenarios. When implemented thoughtfully and with the appropriate safeguards, AI can advance a variety of sectors from labor, education, and healthcare to public administration, finance, and environmental management. In many of these fields, AI is already in use—though closer analysis of the risks and benefits is needed to determine if AI is an appropriate tool for all use cases.
A Closer Look: AI and Health Insurance
Health insurance, a complicated field known for its administrative burden, has become a prime space for AI automation, with one McKinsey report estimating AI could result in billions of dollars in savings. The use of predictive AI to help process claims and calculate care coverage, however, has resulted in denied care to patients in need. An investigative series by STAT found that Humana, United Healthcare, and various Blue Cross Blue Shield plans used these predictive tools to deny coverage and restrict available care. Similarly, a ProPublica investigation found health insurance giant Cigna used a predictive algorithm to process insurance claims that led to bulk denials of claims without proper medical review. Health insurance companies are now facing class-action lawsuits for their wrongful AI use. Doctors are also fighting back against increased denials of treatment driven by predictive AI, using generative AI to write letters to insurers and appeal claim denials.
Future of AI: Considerations for Industry and Policymakers
For all the potential benefits AI carries, there are also associated risks and harms for users. Programming and data sources, as well as the human and systemic context that shape AI models, can be insufficient and biased, leading to unfair, inaccurate, and discriminatory outcomes. In addition, AI systems and their outcomes are not always clear or explainable, which hampers the ability to ensure the systems are accurate and fair, or to allow those impacted by the systems to contest their decisions.
To mitigate potential harms while capitalizing on AI’s benefits, more coordinated action is needed to address the challenges of the AI systems already in use and those yet to come. Together, users, policymakers, and industry must grapple with pressing questions about the use of AI, especially predictive AI, including the following:
- Is predictive AI an adequate, appropriate, and necessary tool for addressing the issue in question?
- What structures for human oversight and intervention are needed to mitigate potential harms and ensure AI functions as intended?
- How can we better vet and test AI systems before they are applied in real-life contexts?
- Is the available training data sufficiently accurate and representative to inform fair AI use?
- What rights and guarantees do users have related to AI systems—specifically regarding data privacy, opting out of AI decision-making, and recourse for AI-driven outcomes?
- How can we improve disclosure and transparency around AI use across fields?