
Skynet Logo Recreation by Peter Verdone

Stop it: AI Isn’t Coming for Your Job If You Don’t Let It

Ever since the dawn of organized societies, human beings have had an increasingly complicated relationship with the technology they create and ultimately rely on. On one hand, we appreciate that human beings are capable of groundbreaking innovations in every sector of life; on the other, we have always felt a need to manage these innovations and temper any potential for adverse effects. In some cases, this need is practical—as with nuclear technology, and in the modern day, semiconductor manufacturing and quantum computing. In some cases, it's impractical, such as believing we can effectively regulate social media the way we regulate TV and telecommunications. In other cases, the need is paranoid, as with cryptocurrency, decentralized autonomous organizations (DAOs), and AI.

Whether our need to control a technology is practical, impractical, or paranoid has much to do with how the controls would be implemented, and whether they are likely to yield effective, positive results. We regulate nuclear technology and semiconductor manufacturing because, as foundational as these technologies are to the modern way of life, they can also be used to completely eviscerate it in one fell swoop. At the same time, the reason we won't be able to regulate social media technology and a child's access to smartphones is that these are issues of effective parenting, not the products of a private organization. At the end of the day, if a kid wants to be on Facebook, they'll just lie about their age and get on Facebook—and it doesn't matter if Congress passes a bill banning everyone under the age of 16 from social media platforms. Finally, trying to place strict controls on cryptocurrencies, DAOs, and AI is paranoid because these technologies hold more long-term potential to contribute to human well-being than to cause human harm.

What we focus and fixate on, however, is the potential for harm: nuclear war unleashed upon the world, critical infrastructure hacked to ribbons, juvenile suicides, government uprisings, neo-feudalism, and more. On the topic of AI, we think of it as one of the four horsemen of the apocalypse—a conjuration of big tech interests solely destined to wipe out or enslave the entire human race.

Our belief in this destiny isn't unfounded, either: after all, at least a dozen major film franchises have used human subjugation—or elimination—by an AI antagonist as a central narrative crux. And this image of artificial intelligence has taken on many shapes, from the iconic HAL 9000 of 2001: A Space Odyssey and Terminator's Skynet all the way to VIKI and the Hosts of I, Robot and Westworld, respectively.

But how true is all of this? Sure, the stories of films like I, Robot and Terminator feel entirely plausible—and certainly possible by today's standards—but is AI actually something that deserves to be feared? At least to the degree that it presently is, given what we presently have? As the public turns its attention to more sophisticated, real-world AI tools like ChatGPT, the conversation has shifted to how these tools will be used to eliminate jobs and ultimately disenfranchise trained professionals. But is this claim actually credible? Or do services like ChatGPT show greater potential to make all of our lives—and careers—better instead?

For those who have been living under a rock, ChatGPT is a conversational AI chatbot released in November 2022 that has taken the world by storm with its ability to complete open-ended and complex productivity tasks. Its sophistication relative to pre-existing services such as Google Assistant and Apple's Siri lies not only in its ability to answer questions with granular detail and human-sounding responses, but also in its ability to handle follow-up prompts, ideate written content, edit and write code, and do just about any other complex task that is text-intensive.

As you might suspect, ChatGPT's intelligence, accuracy, and speed have been upending the academic and professional worlds alike. Stanford students are using it to pass tests. People are using it to create investment bots. Single men are using it to talk to women on Tinder. Marketers are using it to build strategies. The applications feel seemingly endless. With updates such as GPT-4, which expands the platform's input and coding capabilities, and a growing set of ways to connect ChatGPT to the internet, the platform keeps increasing in utility, capability, and features—and in a matter of months, it has become an essential application in many professional toolsets. Hell, I have even used GPT-4 to generate HTML and CSS code for a website that I currently manage on behalf of a client. In short? ChatGPT is fucking awesome.

This robustness, however, doesn't come without its own concerns—and a whole heap of them, to be fair. In a world where education, credentialing, and trained talent have allowed professionals to enjoy a monopoly on a wide swath of skills, ChatGPT is seen by many as a tool that stands to upend this entire system. What is the value of testing in school if every student has access to the same AI that can perfectly answer all of the same questions? What is the value of trained marketers, writers, and coders if there is a system that can do work as good as—if not better than—that of the professionals the average company has access to? What is the value of experience if a fresh recruit can be assisted by a sophisticated program with access to expert-level knowledge?

Professionals care about their monopoly on professional services because they operate under the idea that the monopoly itself provides job security. Thus, anything that stands to democratize those services and allow the entry of new perspectives, techniques, and ideas will be seen as a threat. In reality, it is not that services like ChatGPT will replace trained professionals on their own, so much as that they will likely narrow the gap in knowledge and capability between mediocre and star talent.

And to be clear, I don’t know if that is even likely to happen on its own.

Fear is a common response to technological change. People were fearful of industrialization. People were fearful of automobiles. People were fearful of suspension bridges, telephones, computers, and later the internet. And now, people are fearful of AI. There is an important narrative in all of this: despite what people thought was economically disruptive or physically unsafe, each of these innovations ultimately brought increased quality of life and more access to opportunity for people who did not have it before. The end result wasn't hyper-scarcity of resources, mass unemployment, and economic destitution—in fact, the results were the opposite.

Regarding ChatGPT being the cause of mass unemployment among writers, coders, and marketers, let's be real for a second: this argument is fallacious on its fucking face. The reason it reeks of stupidity and paranoia is simple: if AI is, in fact, as powerful and sophisticated as we have illustrated it to be in pop culture, then it would be powerful and sophisticated enough to eliminate every job. Not just the jobs of writers, marketers, coders, or fine artists—but also those of doctors, lawyers, soldiers, police officers, tailors, cooks, investors, and even politicians. And if it truly is this powerful, then the conversation is no longer about economic destitution, mass unemployment, or resource scarcity. If resources are being appropriately managed—which they would be if the AI managing them effectively replaced its human equivalents—the question becomes: what would human beings do if they no longer had to work? Sure, this might sound naively utopian, but consider for a moment that even in the absence of employment, AI could conceivably achieve the same effects for people that all watershed innovations have—increased democratization of opportunity and increased individual well-being.

If AI is truly capable of outstripping the role of human beings in the economy, then the solution to living in a world occupied by it is building AI around the well-being of humans. Whether we like it or not, ChatGPT follows this model, and we know this because ChatGPT increases access to information and capability for more people. It may simply be that ChatGPT is not built to favor the well-being of industries and institutions that have traditionally profited by leveraging qualitative differences in access to education. It may also be that individuals such as Elon Musk are billing AI as unsafe not because they actually think it is, but because they do not have a first-mover advantage in this space.

But here's the thing: even if this hyper-futuristic version of the world is technologically possible, I would venture to say that the sociological needs of human beings are what will keep the fantasy from becoming a reality. The question people need to ask is not whether automation can replace specific jobs—especially when there is plenty of evidence to suggest that it can replace all of them. Instead, the question is whether human beings want automation to replace specific jobs—and not because automation would replace a job they hold, but because they don't want to deal with a fucking computer when it comes to receiving certain services or solving certain problems. There are plenty of instances where people don't want to deal with computers when planning certain initiatives or ideating certain products—so I have a hard time believing that we could blanket all of our social and economic roles with AI and expect all of them to be "doing good work."

The truth of the matter is that for all the paragraphs of factually correct literature ChatGPT can write, it all more or less carries the same literary voice. If you get ChatGPT to write you a poem, no matter the topic, it almost always follows the same rhyme structure. If you ask ChatGPT to suggest content topics, many of them will be fairly surface-level and basic. And if you enter a prompt with too many characters in it, the system will crash.

Despite the sophistication of ChatGPT, it still has clear and easy-to-identify limitations. That statement isn’t intended to diminish the powerful nature of the product at hand, so much as it is to draw this conversation back into reality: At the end of the day, ChatGPT is a tool—and one that is only as effective as the individual using it. I will go a step further and say that it is not even a replacement for talent. For as much as ChatGPT can be taught and programmed to copy the work of the best writers, coders, and marketers, one thing that ChatGPT will not do is create something that is truly inspired and new on its own. That requires a human operator—no matter what part of the production stack they wish to occupy.

So that brings us back to what employers, teachers, and sovereign individuals should do in the face of this technology. If ChatGPT is a tool, then like the internet, laptops, and calculators before it, this tool should be taught and enthusiastically learned. Ultimately, teachers will have to evolve their processes to create merits and standards that can hold true in the face of innovation, and though that is a tall order, it is one that people can meet.

At the same time, employers need to understand that for all the robustness it offers, ChatGPT is not a replacement for the experts on their roster. While ChatGPT can do certain things faster, the quality of its outputs is only as good as its inputs, which means employers still need people who know what they are doing to truly leverage the power of the platform. ChatGPT on its own will not replace the input of an expert-level writer, marketer, or coder. What it will do, however, is increase the operational footprint of these professionals by significantly accelerating certain processes in a workflow. As it stands, I save time by using ChatGPT to SEO-edit copy that I write, and the same goes for copy-editing for grammar and punctuation. The point is that I do not expect ChatGPT to write better than me, but I do expect it to refine the quality of the product I produce—and to help me produce it faster than before. To me, that is a benefit.

I also think this applies to every industry—not just marketing and coding, but also investing, sales, cooking, and even art. The challenge for most people is working with the system, not against it. Artists are currently in an epic battle with AI companies because algorithms are proving very capable of copying popular art styles. This is especially the case for animators and illustrators of popular cartoons and comic books—and the concern is that if AI can produce images just like the artist, then companies can simply use the AI to create more art and cut the artist out of the equation altogether.

The error in this judgment, however, is that the artist reduces themselves to the style of the artwork they create—and not the ideas in their head. By that style of thinking, the artist is, in fact, the middleman waiting to have their job replaced. But for all its sophistication, AI is heavily inspired by the creations of real people. Put another way, AI in its current incarnation does not have its own ideas. It relies on the ideas of people. It would not be able to render the styles and images of popular artists if it were not trained on those styles—and that's a fact. At the same time, if artists in these spaces can take a moment to stop feeling threatened, they might realize that the very algorithms they are trying to fight could increase their productivity by over 1000%. Imagine how much faster a comic book artist could release their next edition with the help of AI to put together certain image frames in their style. Imagine how much faster a small team of animators could put together a TV series with the help of an AI algorithm to create the in-between frames that produce on-screen action. The fact of the matter is that for as much fear as artists have of AI, it actually has the potential to help artists produce more work and, at the same time, be more independent.

Granted, for all of my optimism about artificial intelligence—and its ability to redefine the workspace—we could still fuck up this version of the future. Just as we can fuck it up with non-existent regulation, zero controls, and a user experience that prioritizes profits over people, we can also fuck it up with too much resistance, stifling rules, and a crippling inability to see past the current moment. It's not that AI's development should be stopped, halted, or "paused," in the words of many tech leaders. It's that AI's development needs to be managed along a very fine line—one effectively as wide as a knife's edge—and one that current tech leaders have systematically failed to walk. I think this is also why companies such as OpenAI are so critically important to defining our future. Perhaps in the hands of an Apple, Google, Facebook, or other major tech company, ChatGPT wouldn't have the guardrails that currently define it—as we have seen with experimental AI bots from companies such as Microsoft, which, ironically, is a major investor in OpenAI.

In its current state, the guardrails put in place by OpenAI allow the GPT algorithm to work as a tool in service of humans, and I think that is a good thing. While yes, productivity-oriented artificial intelligence is a workplace game changer, and certainly disruptive in its own right, its entry into the professional workspace is hardly tantamount to a career apocalypse. While this may result in new types of professionals entering existing industries—as well as the creation of new industries in their own right—I am convinced that all of the flag-waving we are currently seeing about AI will still end with the same professionals occupying the same spaces. The only thing that changes is what people expect out of those roles. At this point, however, the cat is out of the bag regardless of how anyone feels. So if you are legitimately concerned about AI's current ability to replace your line of employment, it may help to step back for a second and consider the ways in which you can use AI to secure that very same employment… And if today's AI does in fact replace your job, the hard reality you might have to accept is that your role probably wasn't as important to operations as you originally thought—or you probably weren't that good at the job to begin with.
