Extended intelligences
A course on the implications (both social and technical) of applying Artificial Intelligence technology in projects and in daily life.
How often do we use ChatGPT? And for what purpose?
This is the phrase we started the course off with. At a glance it's a simple and straightforward question: you use it for homework, for inspiration, or to solve problems outside your field of expertise and knowledge. However, under this seemingly innocent query lies a more important issue: how did we get here, to a point in our lives where we consult a "machine" that seemingly has all the world's knowledge instead of finding the answer ourselves? The course was a means to analyse our relationship with this technology and what we can get from it, as well as what it can get from us.
What is AI?
For me, Artificial intelligence is an illusion of intelligence: it's just an algorithm that was trained to produce certain outputs based on certain inputs.
3 Key learnings from the first few days:
AI as a mirror
AI reflects humanity's values, biases and limitations. These can be amplified in the future.
This technology can lead humanity to a place of danger (or get us out of it) → it depends on perception and usage
Importance of wording
The use of the correct terminology when referring to AI is important
The social impact of AI
Capitalism, climate crisis, justice, infrastructure
All of these learnings represent, for the most part, the dangers of AI.
To be honest, before the course started I had imagined that we would be fully immersed in this technology and fully encouraged by faculty to lean into it and integrate it a bit more into our own processes; this was not the case, however. For starters, I think we all know about the benefits and ways of using AI in today's world, but we don't have the full picture. We were missing a few key points that influence the way we use AI:
Environment
Bias
Time
Environment
As most of us know, as much as it feels like it, AI is not magic. It's actually a really energy-intensive technology that demands a lot of resources to keep running and to keep giving consistent results, from both a financial perspective and a UX perspective.
As with all new and emerging technologies, most companies rush in to keep up with demand from the masses and fulfill this new user need. This results in an avalanche of funding for research into the technology, which only worsens the situation in terms of the resources needed to keep it running. Let's also remember that AI consumes more than one type of resource: energy, water, human labor, materials, and heat.
Bias
There is ONE big drawback to a seemingly neutral and advanced technology that points to the future of reference and knowledge keeping: bias. Where is this bias coming from?
Let's remember the way AI models are trained: first they need a dataset from which to get the information needed to function.
Ok... so where do we get those datasets?
-From already existing books, articles, videos, etc.
Ok... done. Now what?
The problem lies in a couple of words: "already existing". As we know, there is a LOT of content out there from which we can get the data to train AI models; the problem is that we are in charge of feeding this information into the AI. Humans have an inherent bias, therefore books have bias, movies have bias, articles have bias, and in the end we end up with a biased AI model.
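As a toy illustration of how skew in "already existing" text carries straight into whatever a model learns: the tiny corpus, professions and pronouns below are invented purely for this sketch (a real audit would use a large dataset and proper association metrics), but even here the statistics simply mirror the text.

```python
# Toy sketch: bias in the source text becomes bias in the statistics
# a model is trained on. Corpus sentences are invented for this example.

corpus = [
    "the doctor finished his shift",
    "the nurse finished her shift",
    "the engineer presented his design",
    "the teacher graded her papers",
]

def cooccurrence(profession, pronoun, sentences):
    """Count sentences where the profession appears alongside the pronoun."""
    return sum(1 for s in sentences if profession in s and pronoun in s)

# The skew is visible even in four sentences: a model trained on this
# text would associate "doctor" with "his" and never with "her".
print(cooccurrence("doctor", "his", corpus))  # prints 1
print(cooccurrence("doctor", "her", corpus))  # prints 0
```

The point is not the counting itself but that no step in training "adds" the bias: it was already in the data we chose to feed in.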
This may be managed in the future, but for now what we can do is be mindful of the bias present in every response from our preferred AI models and take them with a pinch of salt.
Time
One would think that time has nothing to do with AI. To a certain extent, this is true; the relation between time and AI lies in our perception of time in a social context.
We live in a world with an accelerated pace, in part a consequence of technological advances. We get the image of a cycle where our actions affect the outcome and these outcomes further affect our actions.
Technological acceleration → acceleration of social change → we also assume we must accelerate our pace of life
This insight alone is enough incentive to analyse our relationship with time and the ways the technology we use can further affect our perception of it.
Once we had already kind of fallen out of love with AI due to the immense implications it has for the planet, society and ourselves, we started to understand it on a more practical and technical level. It was time to CODE.
Group project
For this we started with an exercise to guide the project in which we would implement the technical tools we learned.
This is my group's objective for the exercise:
We would like to build an AI tool that helps MDEF or IAAC students get a better understanding of where their true interests lie and which direction they might benefit from taking in their design exploration.
During the weekly seminar, while looking over the technical requirements for building an AI model and learning the necessary terminology and processes behind it, we started to investigate how we could turn our idea into reality.
This resulted in a lot of research, experiments, and failed attempts at building the model.
Our goal was to translate our idea into something that could produce some sort of output. This led us to try to visualize a latent space where students' profiles and all the cards from the Atlas of Weak Signals deck would coexist in the same area.
The idea was that you as a student would be able to locate yourself inside this latent space and see which cards are more closely related to you, as well as which students are nearby.
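To make that idea concrete, here is a minimal sketch of the "locate yourself and see which cards are nearby" step. The 3-dimensional vectors and card names below are invented stand-ins (real embeddings would come from a sentence-embedding model run over student profiles and card descriptions); the sketch only shows the nearest-neighbour ranking by cosine similarity.

```python
# Minimal sketch: rank Atlas of Weak Signals cards by cosine similarity
# to a student's embedding. Vectors here are made up for illustration;
# in practice they would come from an embedding model.
import numpy as np

# Hypothetical 3-d embeddings for a few cards (invented names/values).
cards = {
    "Carbon Neutral Lifestyles": np.array([0.9, 0.1, 0.2]),
    "Data Sovereignty":          np.array([0.1, 0.9, 0.3]),
    "Future of Learning":        np.array([0.2, 0.3, 0.9]),
}

def nearest_cards(student_vec, cards, k=2):
    """Return the k cards most similar to the student's vector."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    ranked = sorted(cards, key=lambda c: cos(student_vec, cards[c]),
                    reverse=True)
    return ranked[:k]

student = np.array([0.8, 0.2, 0.3])  # an invented student profile
print(nearest_cards(student, cards))
```

The same similarity scores could then feed a 2D projection (e.g. with a dimensionality-reduction step) so students and cards can be drawn in one shared map, which is what we were aiming for.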
Course's key learnings and conclusions
Not to rely on this technology as a "miracle solution" that will be used long term as it is used today.
We need to change our mindset around this tool. Right now we are in the boom of the technology.
We need to take responsibility and accountability for our own use.
Keep in mind the impact that this digital technology can have in the physical world.
Change the objective / purpose from an economic growth model to a collective wellbeing approach
Find ways of dismantling, or pivoting from, a capitalist system to a more respectful care economy.