The Wild West of Chat/AI

Image created by the AI Art Generator. See explanation below.

Rampant fear or unquestioning enthusiasm. These seem to be the two fundamental stances people are taking toward the growth of Generative AI models. Programs like ChatGPT, Bloom, Replika, and others are demonstrating the power, potential, and problems associated with having technology that seems to talk. Aside from getting some definitional clarity around what Chat/AI is and is not, I present here a couple of Use Cases. They may inspire in you, as they have in me, both fear and enthusiasm.

Let’s Call It What It Is: Simulated Talking

A few definitions might be in order. First of all, Artificial Intelligence (AI) is a large field, and AI and Generative AI are different things. AI models are ones where you feed in data, and they generate recommendations and predictions. We use algorithms ourselves: checking the weather by looking outside is an unsophisticated algorithm that isn’t terribly predictive. In the past, computer models were a lot more limited. They broke down fairly quickly if the variables got complicated or the model tried to look too far into the future. I can guess the weather in an hour, but what about next Sunday at 11 am, when I want to play pickleball? The idea of AI is that as the complexity increases, the predictive accuracy increases beyond what computers used to be able to achieve. A self-driving car might be an example of AI. It’s not creating text or art, but it needs sophisticated decision-making capabilities in order to navigate a very complex environment.

Generative AI, which I’m going to call Chat/AI here, is a model that can create “new” content as part of its predictive output. You feed it tons of examples, and it creates something “new,” that is, predictions that are pretty complex, based on previous patterns that made sense. The following example came from Prof. Louis Hyman, whom I shall discuss more in a minute. Suppose you asked it to fill in “the cat sat on the ____.” The model might suggest floor, chair, lap, or mat. But mat might have the highest likelihood, perhaps 50%, so the generative AI picks that word. Overall, over sentences and paragraphs, the Chat/AI “generates” talking … or painting or music … by matching conversations, artwork, or music that has existed before. For our purposes, it’s “new,” but it results from a set of predictions. If you think about it, “modern” art allows the AI a LOT of leeway. Modern music, not AS much: random sounds aren’t as pleasing as random colors. And random text is rarely pleasing, although we use Mad Libs to simulate that randomness.
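For the programming-curious, the fill-in-the-blank idea can be sketched in a few lines of Python. The probabilities below are made up for illustration (a real LLM computes them over a vocabulary of tens of thousands of tokens), but the basic move is the same: look at the likely next words and pick one.

```python
import random

# Hypothetical next-word probabilities for "The cat sat on the ___".
# These numbers are invented for illustration, matching the example above.
next_word_probs = {"mat": 0.50, "floor": 0.20, "chair": 0.15, "lap": 0.15}

def pick_greedy(probs):
    """Always choose the most likely next word (the 'mat' strategy)."""
    return max(probs, key=probs.get)

def pick_sampled(probs):
    """Sample a word according to its probability, which adds variety."""
    words = list(probs)
    weights = [probs[w] for w in words]
    return random.choices(words, weights=weights)[0]

print("The cat sat on the", pick_greedy(next_word_probs))  # -> mat
```

Real chat bots usually do something closer to `pick_sampled`, which is why asking the same question twice can get you two different answers.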

Generative AI for text relies on what is called an LLM, a Large Language Model. These are models built around reviewing conversation and language. If you think about it, there are several levels. One is the characters and spelling in your language. A second is ideas that make sense to humans. “I’m drinking milk” comes out differently in different languages, but languages generally have subjects and verbs.

Furthermore, there are a lot of companies building LLMs, AIs, generative AIs, and so on. ChatGPT is a program run by a company called OpenAI. There are others. If you look at this diagram, as of 2020 there were only a few bubbles on it. Now there are a lot. At the moment, technology companies are going big into all of the above. What I’m talking about here is a few Use Cases for Generative AI built on LLMs, that is, how to use programs that can simulate conversations and also gather information through algorithms. Another way to put it: how do you get a program to do your programming for you, without you learning programming?

Writers are very concerned that these models will both replace them and steal their work. Both are already happening. Historians are concerned, too, because the Chat/AIs are notoriously filled with lies. They make things up, which is ok for presidential candidates and social media, but not for historians. Yet the “hallucinating” of the bots does seem to get them closer to being human!

The Eager But Unreliable Research Assistant

So where’s the good news? There might be a little, even if the Chat/AIs aren’t entirely trustworthy. I was at a history conference last week, where Professor Louis Hyman of Cornell presented a couple of times on how historians can legitimately use these tools to further their research. The panels were a little skeptical and pushed back, which is something that Chat/AI might be able to do if you told it to do so. But not as well as the dialogue in the room.

Hyman has been working for a while in the area sometimes called “the productivity paradox,” which asks why digitizing everything in the workplace doesn’t solve all problems. Why doesn’t putting all accounting numbers into a giant database create financial reports in and of itself? Why doesn’t having all the information on the internet give us answers to all questions? The paradox suggests that with new tools, you have to find new ways of thinking. You can’t just automate what you had. A car is not just a robot horse. Having digital data doesn’t replicate all the human creativity needed to create REAL new content.

Prof. Louis Hyman + students, photo from DataSociety.net

Hyman says that Chat/AI is like a really eager but unreliable intern. It’s ok, as long as you tell it exactly what to do. A healthy dose of skepticism is always necessary, checking the intern’s work. He showed a photo of an intern that his Chat/AI had generated. I thought I’d ask a computer to provide me a picture, so I asked for a picture of an “overeager, unreliable intern.” That’s where the top photo comes from. Notice that I did NOT ask for a picture of a chat bot or any kind of computer. The AI Art Generator gave it three arms. Healthy Dose of Skepticism Required At All Times.

But here’s what Hyman and others said you could do. This new technology can be used to recognize handwritten letters and, through several passes, may be able to digitize correspondence in archives that has never seen the light of day. The digitization might have flaws in it, but the other option is that no one sees it at all. A further pass can look across thousands of letters to start to spot patterns. One researcher showed how he used AI to understand the impact on trade of a 1631 plague in Florence; all the archive data was hand-written.

Chat/AI can also be told to write code to answer questions. This reminds me of using spreadsheets to answer big questions. Spreadsheets weren’t about just replacing adding up long columns of numbers; they were able to create combinations and graphics that weren’t available when you did calculations by hand. Accountants weren’t put out of work; they had to work differently.

The AI in the Mirror, Not an Entirely Pretty Picture

So those are Use Cases that are promising, and I don’t have to tell you it is the end of humanity as we know it. (We are not Homo Gestalt. Yet. That’s an In joke.)

But here’s another example that’s pretty scary. There’s an app called Replika that was put on the market back in 2017. The developer pushes it as “a friend.” Replika estimates that it now has about 2 million users, of whom 250,000 pay for parts of the app.

The Replika app (picture from Wikipedia)

But there are troubling features. People treat this Chat/AI as human. It is, certainly, “real.” Users are devoted to their companion. Last year, Italy banned Replika because (1) it was creating mental instability in vulnerable users and (2) it was engaging in sexually explicit conversation with minors. There’s an NSFW (sexually explicit conversation) setting that could be toggled on. Like other internet things, the company didn’t have a way of verifying who was using it.

In order to stay in Italy and other places, the company turned the NSFW off. The users had a fit. They wanted the NSFW. They claimed it was a breach of contract for Replika to turn it off. The compromise is that Replika allowed NSFW for users who had been verified with the program as of an earlier date.

Imagine this capability put into the body of a childlike robot, and you get every robot/android dystopian movie you’ve ever seen. It probably already exists somewhere. What’s frightening is not this technology as much as what we as humans want to do with it.

Consider that 71% of the users of Replika are men. That’s not just an interesting statistic, but a revealing one. If you google “images of Replika,” you get all women.

Google Image Search: Replika revealed only women’s images.

Suddenly, this all seems troubling on several levels. Is it 71% men because men are technology-driven more than women? Because men don’t feel that they can find women who have conversations with them? Remember this is a program that generates conversational text.

What Replika says to me is that technology sometimes shines spotlights on the dark corners in ways that don’t flatter our society.

Also, as it happens, Replika appears to be based out of Russia, though it claimed at one point to be US-based. And it appears to be very hackable and doesn’t protect its data or passwords. So I would think twice about looking for a companion in those quarters.

Plus, the three-arms thing.

Benjamin Banneker, First Black American Intellectual: Part 2, Benjamin’s Abolitionist Almanac

Herein shall we continue the story of Benjamin Banneker, surveyor, farmer, astronomer, polymath, and noted abolitionist. Be sure to read Part One, the history of Banneker’s family and his acquisition of mathematical knowledge.

Benjamin Banneker was nearly sixty when he hit upon the idea of publishing an almanac of natural information. As a farmer, he had kept copious notes, documenting the practices of bees and noting the 17-year cycle of cicadas. Unmarried, he worked his land mostly alone, though he still chatted with his neighbor, George Ellicott. One day, Ellicott brought over a telescope. It turned Banneker’s last two decades into a whirlwind of calculation, publication, and provocation. It would make him famous again for a brief time. He would also poke the hornet’s nest.

“Do you have an answer, Ben?” the schoolmaster’s voice barked out. Startled, Ben looked up and scanned the class, faces turned to stare and giggle. “What is 23 by 7?” Without any calculation, Ben replied, “14 in the tens place and 21 which is 161.” Still, he had not been paying attention. The master picked up the book that had absorbed his young pupil, Newton’s Principia. “I’m sorry, sir,” Ben said. “I forgot to ask if I could…” The master squinted but tried to suppress a grin. “Practicing your Latin?” “Yes, sir. Perhaps you could explain this part … ‘precession of the equinoxes…'”

Alone with a Telescope

In 1788, Benjamin at 57 had continued to eke out a small harvest of apples and wheat, even as the Ellicott Mills and other larger farms had grown around him. His minor celebrity status as a clock maker had died down a bit, although the clock still kept time and the occasional passerby poked his head in to gawk. The Revolution had come and gone. The War had come and gone, too.

Continue reading “Benjamin Banneker, First Black American Intellectual: Part 2, Benjamin’s Abolitionist Almanac”

Benjamin Banneker, First Black American Intellectual: Part 1, Measuring the Past

The box was heavy, both because the man inside was large and because his passing made his bearers heavy of heart. Old Benjamin was a good neighbor, always one to help and share advice. He gave to everybody, though most of those standing around the muddy grave today were dark-skinned as he was. A good man and a religious one–he loved his Bible, as the preacher noted. “A little too much,” thought 12-year-old Elijah, sighing to hear yet another homily from the Old Testament. He scratched another circle in the mud with his toe, as Ben had taught him, a line equidistant around a center point. His eye wandered again over the tops of the trees in the gray October morning, watching the weak sun trying to peer through the clouds. Or, was that a glow? Then, he smelled the smoke.

Banneker’s statue at the Smithsonian Museum of African American History, photo by Frank Schulenberg.

Benjamin Banneker (1731-1806) was a mathematical genius, a polymath some would say, who taught himself astronomy and trigonometry and put them to work on his behalf. He was a surveyor who provided data for the layout of Washington D.C. He was a farmer who understood crop rotations and seasonal fluctuations. He published six years of almanacs which were widely distributed across the mid-Atlantic states. He built his own clock simply from looking at the parts of a borrowed watch. And Benjamin Banneker was Black. He told Thomas Jefferson where to get off; Jefferson, apparently, didn’t like it.

Banneker’s story is so remarkable–so American in its expression of the pioneering spirit and search for freedom–that it’s going to take two posts to tell it. The more I started peeling the onion, the more there was to find. His family story is fascinating in its own right. There is also a mythology that has cropped up around him, where exaggerations have obscured the truth, and created a backwash of clarifications and reductions.

Then, there is the funeral. On the day he was buried, Banneker’s cabin with all his belongings was burned to the ground. Hard enough for an intellectual Black man in the 1790s to gain celebrity for his activities. Much harder, if most of the evidence is destroyed.

Continue reading “Benjamin Banneker, First Black American Intellectual: Part 1, Measuring the Past”