In a marked change from when the Clinton administration opened the internet to the public in 1993, Congress and other government officials are taking note of the potential for malfeasance as consumer artificial intelligence makes its mark. What, precisely, will come of this attention?
I begin every class I teach in information science by instructing students to use Lessig’s four-factor analysis. The factors are law, technology, social norms and the market, each intended to be read very broadly (law can stand in for government, social norms covers a wide sweep of society, and so on). Many people in “tech,” and the media that cover it, would do well to sit in on that lecture. It is time to stop reducing everything to “tech” and to start thinking more deeply about how these factors interact. That was my first thought when Geoffrey Hinton left Alphabet: “You mean he just now thought about the implications of the market?” When I listened to him on the PBS NewsHour, I recognized that he had a more profound understanding. The media coverage of his departure nonetheless reduced him to “tech.”
I have five points.
Any program designed to fend off the further proliferation of mis/disinformation relies on robust education that instills critical thinking skills. My students at Cornell assure me that while they were “taught to the test,” they had plenty of opportunities to learn how to think in K-12 … until one student piped up to say, “Yeah, we did, but we went to good schools. I am not sure about those of us who didn’t.” Give that student an A-plus! “We” is not an adequate assessment of the differentials in public education in the United States. And then there is politics. If Moms for Liberty can get past the trope of repeating “LGBTQ” or “diversity, equity and inclusion” and understand what those phrases actually mean, maybe “we” in the U.S. can get past the “culture wars” and renew a commitment to teaching students how to think. I went to Catholic schools, for heaven’s sake, and I was taught to think critically. We can do better than the talking points that the Mercers feed these right-wing organizations.
We must get policy steps ahead of the workforce consequences of AI. If the disaffection of the millions who suffered the consequences of outsourcing three decades ago has not taught us a lesson (and shown how a former president has taken advantage of the emotions that accompany such dislocation), then I don’t know what will prompt us to plan for the workforce disruptions projected for AI.
Execute those plans now. Create apprenticeships, encourage appropriate coursework in community colleges, institute new AI majors that cover the politics, economics and social factors that frame it. I could go on, but I think you get my point.
Technology Is Not the Problem; It Is the Information That It Creates
Herein lies the most important issue. The U.S. does not have a framework for understanding and evaluating information. In a global information economy, that lapse is a mistake; with the emergence of AI, it has become a critical gap.
No need to hold our breath for constitutional shifts; the Dobbs decision made that point perfectly clear. We are not the E.U., and we will not see terms such as “privacy” in our constitutional firmament. But now is the time to refresh our understanding of Helen Nissenbaum’s work in “Privacy in Context.” Nissenbaum smartly recognizes that the various applications of “privacy” (intimate relations, government surveillance, consumer applications, etc.) require tailored treatment, united by the verbs that apply to information: flows, access, control and so on. She calls for processes to establish rules appropriate to those specific areas. In plain language: apply fair information practices relative to the value of the information and the social and political need to manage it.
In the AI realm, content authenticity must be added to that list. Content authenticity would mark content from trusted sources, which, together with critical thinking skills, could do a great deal to address mis/disinformation. Oh, sure, the bad guys will learn to scam it, but most law enforcement in technology, and not least in intellectual property, is about staying one small step ahead of the criminals. I mention it now because it is a good start. More than anything, it is an approach we can learn from as we create more technologies to counteract AI-driven malfeasance. And note: it functions in tandem with Nissenbaum’s work.
Revival of Citizenship and Consumer Rights
I can sum up the theme of both of my classes this semester in one phrase: information inequality. Nowhere in the developed world is the deficit as great as it is in the U.S. That deficit is what undermines our ability to understand and regulate consumer AI. But the attention that AI has brought to the “internet” also presents an opportunity to rethink issues from platform responsibility to citizenship and consumer rights in cyberspace. Bipartisan interest in this issue holds promise for more meaningful conversations that could make a difference. Let’s not waste it!