    Feb 26, 2024

    Art is important, and irreplaceable.

    OpenAI, unfortunately, doesn't think so.

A few days ago, I posted a tweet that went viral (on this weird website that I'm told was formerly known as "Twitter". Hmm, dunno ... ), which generated quite a few interesting conversations among some of my friends.

    Just so that everyone's on the same page before we get into the weeds, here's the gist of it:

[Screenshot of the tweet]

Strong, emotive words that could only come from the helpless pain of loss. And this coming from a former pet owner, too.

    There's a serious conversation to be had about generative AI.

And I say this as someone whose future career pretty much hinges on how well this industry does.

In recent years, I've had friends and family ask me whether AI will bring doom and gloom to our lives, and I've always been practical in my responses. I've always maintained that the negative response to the rise of AI is a natural, sentimental one, and that there's no need to panic. There's no evil superintelligence out there, at least not in the foreseeable future.

In hindsight, I probably should have had less Terminator and more Margin Call in mind. AI isn't inherently evil. It wasn't created just to steal your jobs. A gross oversimplification would be that it is just a complicated mathematical function that gets adjusted automatically, by trial and error. Every time it gets something wrong, it gets a whack over the head and a telling-off. Every time it gets something right, it's given a virtual cookie. It's not an evil robot with a gun. It's good old math.

Machine learning has been around since long before OpenAI and ChatGPT took all the spotlight. Researchers in mathematical, scientific and humanities fields have long used models to spot patterns no human could ever be capable of spotting. Last year was simply the first time AI and ML popped into the public space, in all their glory, and started rambling about how they're sorry, they're not allowed to do your homework. There shouldn't be an irrational fear of AI, is my main point.
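If the whack-and-cookie analogy feels hand-wavy, here's a toy sketch of the idea in Python. This is my own illustration, not anyone's actual training code: a "model" that is literally one adjustable number, nudged step by step in whichever direction makes it less wrong.

```python
# Toy sketch of "training": a model is just a function with adjustable
# numbers (weights), repeatedly nudged to make fewer mistakes.
# Real systems do the same thing with billions of weights.

def predict(weight, x):
    return weight * x  # our "complicated mathematical function", minus the complicated

def loss(weight, data):
    # How wrong are we overall? Bigger number = bigger whack over the head.
    return sum((predict(weight, x) - y) ** 2 for x, y in data) / len(data)

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # secretly y = 2x
weight = 0.0
lr = 0.05  # learning rate: how hard each whack lands

for step in range(100):
    # Estimate the slope of the error numerically: which way is "less wrong"?
    eps = 1e-6
    grad = (loss(weight + eps, data) - loss(weight - eps, data)) / (2 * eps)
    weight -= lr * grad  # nudge the weight downhill, toward fewer mistakes

print(round(weight, 3))  # ends up very close to 2.0 -- no evil robot required
```

That loop is, at heart, all "learning" means here: measure the mistake, nudge the numbers, repeat. Nothing in it wants anything.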

Business people, however, are another story. We've demonstrated time and time again that corporations cannot be trusted to act for the greater good. Or to exercise caution in anything. Just ask Bear Stearns and Lehman Brothers. Just ask FTX. Just ask... ah, you get the idea.

The danger AI brings was never about its inherent capabilities; it was always, always about its creators. We've so far relied on the assumption that the experts of AI would understand its dangers and exercise caution over its development. We've assumed that, given the immense potential danger AI may bring, there would surely be legislation before anything gets out of hand. At least, I was convinced we'd know better.

Part of that reason is OpenAI. Or rather, what it was and represented before ChatGPT became all the rage -- and its flagship product.


    OpenAI started off as a non-profit organisation in 2015, but organisational structures and company aims became complicated after its decision to pivot to a "capped profit organisation" in 2019.

OpenAI did not have humble beginnings. In 2015, OpenAI began with the support of AWS, Infosys, YC Research and the then-sane Elon Musk. One year into its operations, it had managed to attract top researchers worldwide into its fold, with many citing the organisation's clear direction, one that aligned with their own values, as the main pull factor. Its main mission: to develop "safe and beneficial" Artificial General Intelligence.

Then, of course, came ChatGPT, among the many different product offerings they have today. In 2019, OpenAI pivoted to a business model that didn't yet exist: a "capped" for-profit organisation, where investors' returns are capped at 100x their original investment (put in $1 million, and the most you can ever get back is $100 million).

This was, as expected, a paradigm shift for the organisation. Previously, leaders of the organisation were required by law to disclose their earnings -- they no longer needed to do so. OpenAI could now compete with big tech on salaries for researchers, another big push towards a profit-driven model for what was once supposed to be a non-profit for safe AI development.

    Researchers raised red flags then, and I bet they were all muttering "I told you so" under their breaths about 2 weeks ago.

    Sora is different.

Many were quick to point out, after OpenAI revealed its new "data-driven physics engine", that it is as impressive as it is terrifying. Sora generates photorealistic video from natural-language prompts, and does so freakishly well for a first-generation offering. Within a day, Sam Altman had demonstrated Sora's capabilities in recreating some of Earth's most wondrous sights, most beautiful landscapes, and of course, our cutest pets. It convincingly generates videos that are almost indistinguishable from the real videos we might take on a day-to-day basis, and yet none of the subjects exist in real life. It is an incredible feat of computing, and part of me remains in awe of the amazingly smart people who must have spent countless hours making it work.

Here's a question: did no one at OpenAI, as smart as they must be, question what value Sora brings to humanity?

    I've been asking myself that question ever since Sora was announced. What will Sora be used for? Entertainment? Industrial applications? Can it ever be a tool? Whose lives will it benefit?

    And if there's no satisfactory answer to those questions: What did we sacrifice in this pursuit of the next shiny thing?

    What the **** did we just do?

Because Sora's danger isn't about "destroying Hollywood" or this immensely hopeless election year in the US. Corey Brickley was right. It undermines our real world in unprecedented ways.

    Because video isn't just about entertainment. It is our way of recording the passage of time in a very, very finite life.

For humans, there is no better, more "raw" way of recording our lives, our stories, than in video format. Every frame of a landscape is meaningful not just because of its composition, its colour-grading, or its frame rate. All those aspects of a video are meant to bring out the best in the world we look at every day, from the day we were born to the day we return to just being boring old stardust. And at the risk of sounding overly sentimental (something I've always been against when talking about AI), video is the rawest form of recording the human condition and our struggle with everything. The loss of a cat. The snow reminding us of people we once knew. Places reminding us of bygone times. The light, the sounds, yanking us from the present into our memories.

Sora is different from ChatGPT, in that the latter has immense potential to fundamentally change the way we communicate with computers and with each other. Sora takes aim at one of our most fundamental ways of recording our lives and expressing ourselves. Of art.

    And therein lies my point of the day.

    The use of generative AI in art forms should be regulated by legislation.

    i.e...

    (can i say the bad word?)

    Bans. Oops, controversial. Might get cancelled by fellow tech bros.

Art is created by and arises from human struggle. Always has been, always will be. Sora takes away that ever-crucial act of creation. It takes away arguably the only good and beautiful by-product that comes exclusively from the fact that life is a brutal, bloody fight. That life is a day-in, day-out 9-to-5 with sprinklings of laughter and happy times. Take that away, and human struggle means nothing.

    Ironically, or rather expectedly, the most dangerous part about AI is us. Again.

    Xu Jialu

    author. i am a cs student at ntu singapore and i sometimes write articles just for the heck of it :D
