
Ask the experts: Where will artificial intelligence go next?

- June 5, 2023

How to regulate the development of AI and provide guidelines and structure for safe use is among the key questions facing society now. (Google DeepMind/Unsplash)

Artificial intelligence — or machine learning — is coming on stream quickly, with new chatbot tools and plug-ins being added to the digital landscape every day. And as they come online, so too do questions about how to contend with a seemingly limitless technology that some worry will upend the workforce, allow false or inaccurate information to proliferate and blur the lines between what is real and what is not. (Recently, an AI-generated image of an explosion near the Pentagon spread widely online, sparking confusion before it was determined to be fake.)

Companies are rushing to figure out how to incorporate generative AI and its ability to perform routine tasks faster and cheaper than humans, while people working in coding and programming, writing or communications, legal and other professions are nervously wondering about the fate of their jobs.

Pressure is also building for governments and policy makers to introduce regulations to control this burgeoning technology. Even some of those who were instrumental in developing ChatGPT and other so-called large language models (LLMs) have pushed for a slowdown in the adoption of AI, going so far recently as to warn that it could lead to human extinction if not adequately regulated.

To learn more about where we are and where we are headed with AI, we spoke with three experts in the field: Brian Hotson and Ayesha Mushtaq of the Faculty of Open Learning and Career Development, and Christian Blouin of the Faculty of Computer Science.

How do you stack up the pros and cons of having this technology? What are some of the good and bad things that it can do for us?

Brian: I was creating a survey for students and I needed to input a column of months and years from 2010 to 2023. I asked ChatGPT and it created one for me in a couple of seconds. It saved me time typing out the list — I'm a horrible typist. So, these are the kinds of things where it is really, really good: sorting a reference list alphabetically and putting it into a consistent format, for example. But it's not good at creating knowledge, because it cannot create knowledge.
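As an illustration of just how rote this kind of task is, here is a minimal Python sketch that produces a similar column of months and years; the "Month Year" label format is an assumption, since the survey itself isn't described in detail.

```python
# A minimal sketch of the rote task described above: building a
# column of month-and-year labels from January 2010 to December 2023.
# The "Month Year" label format is an assumption; the interview does
# not specify how the survey column was formatted.
from datetime import date

labels = [
    date(year, month, 1).strftime("%B %Y")
    for year in range(2010, 2024)
    for month in range(1, 13)
]

print(labels[0])    # January 2010
print(labels[-1])   # December 2023
print(len(labels))  # 168 entries (14 years x 12 months)
```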

So, it may be a good tool for basic activities like that — producing a list of dates, for example — but when you get into more nuanced things maybe we shouldn't rely on it too much?

Brian: Yes, I asked it to write an essay about itself with citations, and it wholeheartedly created a reference list for me, but none of it was correct. It had made up every reference. It pointed me to the right website and I could then go and find the right information, but every single reference — about 12 to 13 — was manufactured by ChatGPT. It was beautifully formatted, but it was empty of any meaning. In fact, it was duplicitous.

Ayesha: I think as educators we need to think about what value AI is providing to students and how we can maximize that value in our learning. For example, it spares you from staring at a blank piece of paper for a long time and helps you get over the initial writer’s block and get started. So, if we have assignments to do, we might allow some of the work to be done by ChatGPT, such as producing an initial draft, formatting and structuring, or proofreading, which can then leave more time for critical engagement with the content.


(Steven Johnson/Unsplash)

How do we manage the emergence of this technology? There is a tension between using it for more mundane tasks and the worry that it will overtake them, take on more responsibility, and become unwieldy and unreliable. Is it happening too quickly, as some have suggested?

Christian: What makes it disturbing is that it pushes the boundaries of what we believe to be inherently human. Right now it's more decision support, writing support, or a way to automate repetitive tasks; the interesting and important intellectual contribution is still a human's decision. That boundary will keep on being pushed over and over again, and right now we are just experiencing a quickening of that pace and also a wider range of applications.

To address the issue of slowing down, I think we have a responsibility to understand the legal and ethical limitations of using large language models. We cannot embrace them as de facto tools in the university because there are so many issues around the ethics of how they were put together, the lack of implied consent and the legality of exposing personal information. Slowing down is an idea that gets bounced around, but I am concerned that the technology could get away from us if ethical people pause development while others keep on developing and corner the AI space.

Brian: I agree with Christian. This tool is fundamentally changing education; it's actually a pedagogical issue. What we should do as educators at institutions of higher learning is prepare for the students who are coming. My son is in high school, and students like him will be so steeped in this that when they arrive at our institutions, they will expect us to be aware of what they're doing with LLMs and to know the language of LLMs. This is digital literacy, and it’s so vitally important. It's also important to ask about the ownership of the data we ask students to put into tools like ChatGPT. The privacy policies of ChatGPT, for example, are egregious. ChatGPT takes every single bit of information you give it and puts it into its database, including the chats and emails you send. So, what is our responsibility as universities to provide faculty and students with the kinds of literacies to understand what these tools are doing, what they're taking and what they're giving back? These are the things we need to keep in mind.

Ayesha: For the longest time we’ve had this idea that instructors are the sole bearers of knowledge in the classroom, which isn't true anymore. Knowledge is so readily available these days, so we have to reimagine how we run our classes. Instructors can no longer simply provide information that can be found elsewhere. I think we must transform the way we spend time in class with our students. And for this to happen, we should try to make our classes as “applied” as possible, emulating real-life professional situations and engaging with course content through discussions, critical thinking and thought leadership within our fields.

There seems to be a lot of consternation and concern now about this. Were we a bit caught off guard by this or are we overreacting?

Christian: I think we're going to be permanently caught off guard. It's just the beginning of it.

Brian: If you look at which companies are putting LLMs in their systems, the whole point of these plug-ins is to hoover up data. There’s a new LLM-based tool called Humata, for example, where you upload every single document on your computer into the tool and then access your data through a ChatGPT-like interface. You're giving all of your data to this tool, which will use it for itself in one way or another. As author Shoshana Zuboff says, we are just the carcasses of surveillance capitalism. These corporations simply want our data. There's no conspiracy theory here; this is what these tools have been doing for years. Whether it's Facebook following you all around the internet or something else, these tools are going to make it more efficient to collect your data. What LLMs give back to you will seem more human and accessible, but at the same time, what are we giving up for this?


(Viralyft/Unsplash)

Perhaps this is a reckoning point since many of us have been unwitting participants in the collection of data. Where do you think this might be heading?

Ayesha: We need to stop finding educational solutions to political issues. Education guidelines can only provide temporary solutions, so I think in the long run we need a guided, informed political solution on how to use these tools.

Christian: I think the technology is going to melt away into tools that we are already using. Sometime soon, Outlook may get better at summarizing and organizing emails, for example.

But I am oddly optimistic about the whole thing. First of all, we have chronic short-staffing, and a lot of the work we do is repetitive, so I think that rolling these technologies out carefully is actually going to bring relief by automating repetitive tasks. I don't think rolling them into everyday tools is going to lead to massive downsizing and people being replaced, because they are only support tools. We'll be able to ask: what can we do better?

Brian: In Ontario, the provincial government surveyed Grade 9 students, and only 55 per cent said they had strong connectivity to the internet. The only way we're going to be able to use these tools is if everyone has effective broadband connectivity. The UN has declared access to the internet a human right. These tools are useful only when we all have connectivity. It's an issue of socio-digital justice, which my colleague Stevie Bell and I write about.

There is a lot of talk now about trying to regulate AI. Is that something we should be looking at?

Ayesha: I will go back to my point about the need for political solutions, as AI is very much an issue of access and equity. If you look at the cost of the device you need to access ChatGPT or other software, a lot of people don't have that purchasing power. If the pandemic taught us anything, it is that technological accessibility is uneven terrain. We can't stop the growth of technology, and we shouldn’t, but we can take control of the steering wheel on how it unfolds and make sure we are providing a fair playing field for all our students and staff.

Christian: I think legislation is the responsible thing to do. We can't stop the private sector from developing products, but we can set guidelines that make them safe for citizens to use. I think it needs to be restricted so it's not left to start-up companies to decide that something is appropriate simply because it is possible.

(Google DeepMind/Unsplash)

The message I'm hearing is that it's here and we just need to equip ourselves to deal with it. Is there a word or qualifying term to express how you're feeling about this new reality?

Brian: It's like what Socrates said about writing: that it would destroy knowledge, because oration was the mode of learning at the time. We've been handwringing about changes like ChatGPT forever. And we're just at the very beginning of its impacts.

Ayesha: I work with language educators, so I have been hearing, 'We can't allow it because it's just doing the work for the students.' So, I think we need to calm down first, then embrace the technology by educating ourselves and demystifying it for ourselves and for our students. I think making peace with its existence, getting excited about it and pushing ourselves to reimagine teaching and learning alongside AI is the future — that's how we should move forward.

Christian: I think we need to go through the five stages of grief really quickly, move on to acceptance, and proceed thoughtfully.