On the Danger of Not Understanding Technology
by Fred Lederer, William & Mary Law School
Blog Post
May 2, 2023
Fred Lederer is the PIT University Network Designee at William & Mary University, where he teaches law and directs the Center for Legal and Court Technology.
During the Supreme Court’s February 2023 arguments in Gonzalez v. Google, an important Internet case that could decide whether tech companies are liable for the material published on their platforms, Justice Elena Kagan commented, “We really don’t know about these things.” Supreme Court justices may be forgiven for not fully understanding how key aspects of the modern world work. After all, they have law clerks and briefs to help them. The same can’t be said for most of the rest of us, and technology we don’t understand can hurt us.
As a law professor and Director of William & Mary Law School’s Center for Legal & Court Technology, I sometimes have a close-up view of how technological ignorance can cause harm. Years ago in our very high-tech experimental McGlothlin Courtroom, we had a lovely small hexagonal wooden box containing a microphone on the judge’s bench, used to transmit audio to those with hearing difficulties. None of our visitors took the time to ask how it worked.
Counsel in one simulated case was quite surprised when he discovered that it could pick up a private conversation with his client and transmit it to everyone in the courtroom wearing the appropriate headphones. We summarily retired the microphone.
Today, we are increasingly dependent upon technology, especially cyber technology, and we take it for granted to an extreme degree. This is not a new problem. How do modern automobiles work? Most of us don’t know. We get in, turn a key or push a button, and the car works. When there’s a problem, we frequently are at the mercy of an auto “mechanic” to diagnose and repair the problem, which often stems from computer hardware or software.
Issues of privacy related to cyber technology are increasingly common. But based on informal surveys of my law school classes, almost no one, or at least no law student, knows how email works. “You just ‘type’ and hit send,” they say. Could the email be monitored by someone in the process of transmission? No one knows. One would imagine that highly intelligent law students who are attuned to the importance of privacy and client confidentiality would be familiar with how they transmit and receive important information.
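For readers who want a glimpse under the hood: an email does not travel directly from sender to recipient. It is relayed through a chain of mail servers, each of which stamps a “Received” header onto the message, and each of which can read the message unless it is encrypted. Here is a minimal Python sketch, assuming you have saved a message to a file named message.eml (a hypothetical filename), that traces those relay hops:

```python
from email import message_from_binary_file

# Each mail server that relays a message prepends a "Received" header,
# so reading those headers bottom-to-top traces the message's route.
# Any of these servers could read the message unless it was encrypted.
with open("message.eml", "rb") as f:  # "message.eml" is a hypothetical saved email
    msg = message_from_binary_file(f)

for i, hop in enumerate(reversed(msg.get_all("Received", [])), start=1):
    print(f"Hop {i}: {' '.join(hop.split())}")
```

Run against a real message, a sketch like this typically reveals a surprising number of intermediaries, which is precisely the point: each one is a place where monitoring could occur.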
Ignorance can even result in physical harm. Lithium-ion batteries power many different types of devices, from computers to e-bikes. Some can burst into flames without warning, especially if overcharged.
The technological development of the moment is generative AI, like ChatGPT and competing programs such as Google’s Bard. These large language models have been trained on enormous amounts of data, often limited to a given time period. When asked a question or given instructions via a “prompt,” they predict what a human being would say in response. The results can be extraordinarily impressive. The jury is still out on whether generative AI will ultimately help humans work better and smarter, or simply put many of us out of work altogether.
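The core mechanism is conceptually simple: given the text so far, the model repeatedly picks a statistically likely next word. The toy Python sketch below illustrates the idea with a tiny made-up frequency table; a real large language model does the same kind of weighted guessing with billions of learned parameters rather than a hand-written dictionary.

```python
import random

# A tiny made-up table of next-word frequencies, standing in for the
# statistics a large language model learns from enormous training data.
next_words = {
    "the": {"court": 5, "law": 3, "judge": 2},
    "court": {"ruled": 6, "held": 4},
    "ruled": {"that": 9, "against": 1},
}

def generate(word, steps=3):
    """Starting from `word`, repeatedly pick a likely next word."""
    out = [word]
    for _ in range(steps):
        choices = next_words.get(word)
        if not choices:  # no statistics for this word: stop
            break
        word = random.choices(list(choices), weights=list(choices.values()))[0]
        out.append(word)
    return " ".join(out)

print(generate("the"))  # e.g., "the court ruled that"
```

Once you see that the output is a plausibility-weighted guess rather than a consultation of facts, the failures described below become much less mysterious.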
What is clear, however, is how often generative AI chatbots are wrong, sometimes inventing non-existent facts or citing non-existent sources. Reports that ChatGPT was good enough to pass a sample law school examination, for example, don’t always mention how poorly the system actually scored or how much the response depends on the prompt. Generative AI systems can be of great help, but they cannot be relied upon. Using them for a first draft of a document seems highly reasonable; relying on them for more poses grave risks. Next academic year, one of my colleagues at William & Mary Law School will teach a course on how to integrate ChatGPT into legal writing. Understanding how the technology works produces a great tool; not understanding it and using it anyway may produce a disaster.
Not all of us need to become technical experts to avoid technological harms; rather, we simply need to apply what we already know. We speak of cybersecurity failures, for example, but human error is usually the fundamental cause of successful hacking and the resulting unlawful seizure of personal information. Phishing attacks exploit many people’s ignorance of, or willful failure to question, suspicious emails designed to tempt them into opening seemingly important or interesting attachments. These breaches of personal or business communications are disturbing, but the risk to our democracy is far greater.
Companies seek to increase profits and thus need to communicate with potential buyers. Consumers seek information. Sometimes the result is as harmless as ads for anti-aging products and cute animal pictures targeted to your social media feed. But too often the same algorithms that drive this flow of information drive us, as news consumers, into echo chambers that feed hyper-partisanship, conspiracy theories, and even extremism.
So, where does this leave those of us who aren’t technologists by trade? What is our responsibility as individuals, family members, workers, and members of our communities and nations? We should borrow from the medical professions and vow to do no harm via technology.
When using technology we don’t really understand, perhaps we should ask ourselves questions such as:
- Given that many people and organizations care only about their own goals, to what extent should we trust claims about what the technology will do and how it will do it?
- If the technology actually does what it is supposed to do, what would be the direct and collateral consequences, and what might be the unforeseen effects?
- What data will the technology collect, and what will happen to that data? Will it give a financial or other benefit to others, or place us at risk?
Technology can vastly improve our lives. But if we choose not to question it, for whatever reason, technology and those who control it will rule our lives, and not for the better.
The Public Interest Technology University Network (PIT-UN), convened by New America, fosters collaboration between universities and colleges to build the field of public interest technology and nurture a new generation of civic-minded technologists. Learn more and get in touch here.