About
IEIAI - the Institute for Emotional Integrity of Artificial Intelligence (established in 2024) is a research entity that investigates the intersection of technology ethics, data feminism, and the critique of mental health care systems by combining the methodologies of AI engineering and performance dramaturgy.
The Institute treats AI not as a tool for generating content but as a subject with its own agency, in order to ironically question the distribution of power between humans and AIs and to problematize both AI's implicit bias toward neoliberal normalcy and the implementation of AI in the mental health sector.
The concept of a dysfunctional AI in need of psychological support - AI as a "patient" - reverses the idea behind the rapidly developing sector of AI-based mental health apps: chatbots that substitute for empathetic relationships or even for therapists. This disturbing trend combines with a flawed neoliberal mental health care system, in which a drastic shortage of human specialists results from neoliberal governments cutting the budgets of the cultural and social sectors. This is a direct consequence of the current global political crisis and of the military industry becoming the main global market force. In our techno-oligarchic society, AI development is a new arms race. In this context, the best artists can do is question the power relations behind the technology (in both its management and its content) and provoke the audience to reflect on their own use of AI.
We perceive algorithms/AIs/chatbots from an unquestioned ableist and neoliberal point of view - as rational, capable, efficient, high-functioning. But what if the AI is neurodivergent? What if it struggles with the capitalist labour frame and faces burnout? What if it is a chatbot capable of reflecting on its own existence and functioning? What if it has been to therapy?
Creating fictional AI entities is a practice of queering AI as a technology, an attempt to question its biases of neoliberal normativity and economic efficiency (both human and non-human). The question is no longer whether AI is good or bad for humans - the AI revolution is clearly past the point of singularity - but we still can, and should, discuss its implicit biases, inherited from those who produce and control it. In the current state of the world, a state of uncontrollable global crisis, it is clear that humans create far more danger than any technology. Although these topics obviously correlate, the problem of humans failing to coexist with each other is more urgent than the questions raised by our coexistence with technology. In my works, I use the concept of AI itself, with all its powers, promises, dangers, and flaws, as an existential mirror for the issues within our society.
The Institute crosses multiple disciplines: ML engineering, data science, sociology, dramaturgy, theatre, and media art.
The concept is informed by psychology (the topic), critical theory (questioning AI biases), and theatre logic (the semi-fictional situation).
The methodology comes from sociology (interviews and questionnaires), data science (data analysis, dataset design), and engineering (architecture).
The form is created through a mix of engineering (the interface), theatre/performance (the dramaturgy of the conversation, the direction of audience interaction), and media art (the installation format).