One of the most interesting and divisive trends to emerge in the past few years has been the dramatic upsurge in the capabilities and prevalence of AI across multiple mediums — video, photo and text. AI is being integrated into our daily lives at every level — built into software, cars, digital assistants, social media platforms — you name it, it’s there.
But as with everything revolutionary, it comes with drawbacks — substantial ones. For most of us, it’s like a black box. You put a prompt in, and it spits back an answer or an image. The quality of that output can vary dramatically depending on how skilled you are at writing the initial prompt, the type of prompt you’ve provided, and which AI platform you’ve chosen and how it was trained. Usually, the output is at least passable. I like to describe what you get from AI as similar to the work of a skilled intern — you definitely have to check it, because your mileage may vary.
The issue is, all of that information has to come from somewhere, be interpreted by various AI models, and synthesized into a coherent result. The first problem: garbage in, garbage out. If the source information is incorrect, so too will be the answer or output you receive.
The second problem: there’s a lot of copyrighted material floating around online, and if the AI happens to scrape that source data — scraping in this case means automatically retrieving information from a website — what you get back may well fall under the definition of “plagiarism.” That’s not an issue if you’re looking for the best pizza places in Old Forge, but it certainly is if you’re writing a blog or using AI to help create digital artwork.
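For the curious, here is a minimal sketch of what scraping looks like in practice, written in Python using the common requests and BeautifulSoup libraries. The web address is purely hypothetical, and real crawlers that gather AI training data do this across millions of pages at once.

    # A minimal scraping sketch: fetch one web page and pull out its text.
    # The URL below is hypothetical; real crawlers repeat this at enormous scale.
    import requests
    from bs4 import BeautifulSoup

    url = "https://example.com/some-article"            # hypothetical page
    response = requests.get(url, timeout=10)            # download the raw HTML
    soup = BeautifulSoup(response.text, "html.parser")  # parse it into a document tree

    # Collect the visible paragraph text -- this is the "source data" a model
    # might later be trained on, copyrighted or not.
    paragraphs = [p.get_text(strip=True) for p in soup.find_all("p")]
    print("\n".join(paragraphs))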
Perhaps most portentous: sometimes even the people who develop these AI platforms can’t say definitively why they give the responses they do. AI is capable of self-teaching and self-training to a degree, so you wind up with what you might call “emergent properties” — unexpected outcomes or results that materialize because of complexity. No, I don’t think AI is close to taking over the world. But I do think the nature of the technology demands some guidelines, specifically that it should be able to explain why you’re getting the answers you’re getting and where they’re coming from.
My stance on it is more or less a cautious agnosticism — it’s a great assistant, it’s a huge time saver, and for quick answers to questions that require more nuance than a typical Google search, it’s excellent.
The danger is that while right now you can often — but not always — tell that you’re looking at AI-generated content or imagery, the technology is getting better and better at producing human-like output. The risk is that someone could theoretically just punch in “Give me a picture of person X doing questionable thing Y,” get exactly that, and present it as a real photograph or video. AI detectors are already a thing, but they’re far from foolproof.
As with any technology, you can’t put the genie back in the bottle, and it’s already at the point where regulations on how it’s used can be sidestepped. The moral — and pretty much the only thing we can do in an increasingly ambiguous world — is to think critically about what we see or read.
Nick DeLorenzo is the CTO of the Times Leader Media Group and CIO of MIDTC, LLC. He is from Mountain Top, Pennsylvania and has covered technology for the Times Leader since 2010.