I’ve been getting some questions about transformative justice lately, so here’s an attempt at a quick 101 of what that means. It’s a first draft, a work in progress.
Transformative justice is built on the belief that we all generally want to be liked by the people around us and want those people to be okay. The stronger our sense of connection, the more likely we are to want to help and not harm people. So we generally do not act harmfully unless there are root causes.
Some examples of root causes:
We do not understand that our actions are harmful
Our basic needs are not being met (could be physical needs, mental health needs, etc)
We are hurting in a way that isn’t acknowledged and are lashing out as a result
We reproduce a harmful oppressive system (sexist violence, racist violence, transphobic violence, etc)
… other root causes that I’ve forgotten right now
Punishment does not solve any of these causes. Punishment can make us too afraid to act for a while, but in the end, if these root causes are not addressed, our harmful behavior is going to keep coming back.
But just as importantly: because punishment is forced upon the punished, it can only happen when the punisher has more power than the punished. Punishment is a matter of who has the power to punish, not of who is right or who is deserving of punishment. Generally, punishment doesn’t happen to the bad people, just to those without the power to avoid being punished. Punishment maintains existing power imbalances and creates new power imbalances, new harm, new wounds, and as a result new harmful behaviors. Punishment perpetuates harm.
So, what is the alternative?
Well, transformative justice relies on three things:
Protecting the victim and giving them space to heal (sidenote: there isn’t always a simple victim-actor binary)
Protecting the community and giving it space to heal
Working with the harmful actor to see what is needed
Focusing on the last two parts here, transformative justice means having genuine, honest conversations with the harmful actor to achieve, for example:
The realisation in the actor that the behavior is harmful and needs to change
The realisation in the community that someone’s basic needs were not being met and that needs to change
The realisation in the community that someone’s hurt was not acknowledged and that needs to change
The unlearning in the actor of the oppressive behaviors that prompted the harmful behavior
The realisation in the community that there was no real harm and that the behavior that broke the ‘rules’ was never harmful to begin with and the ‘rules’ need to change
A combination of these things
In short, if there is harmful behavior, it means something about the way we have organized our society probably needs changing. Often other things come up that cannot directly be identified as ‘root causes of harmful behavior’, like ‘a person who was lashing out was able to recruit a group of friends into their harmful behavior’, and those things then need to be addressed too. Transformative justice isn’t just about the actor, it is about the whole community.
Where there is harm, there is also disconnection. Pain, anger, broken trust. So identification of the root causes is followed by transformation: the root causes of the harmful behavior are removed and the connection between actor and community is restored.
The goal of transformative justice is NOT that the harmful actor puts on a show of the right apologies and demonstrations of change. It’s not a performance of accountability. Transformative justice is about creating actual, messy, slow, imperfect change. Remorse is not a required component. The goal isn’t a specific emotion or act; it’s reaching a situation where no new harm will occur and connections are restored.
It’s hard work, for the harmful actor and for the community. It is generally not fun. When it is done by a group of people who have grown up in a culture of revenge and punishment, it’s very, very difficult work. Since we’re already making lists, here are some…
Common pitfalls:
We don’t always have the resources to address the needs that are not being met, whether they are physical needs or mental health needs.
We don’t always have the skills needed to really listen to each other, to find the root causes behind harm, to work on genuine healing, etc. We’re quick to fall into familiar patterns of punishment and revenge, or to demand disingenuous, performed apologies so that we can have simplicity and closure.
Transformations are often slow and unclear, creating a long period of uncertainty.
There is no clear sense of when it’s over or whether a harmful actor is putting enough effort into ‘dealing with their shit’. If someone is lashing out as a result of a lifetime of abuse or a deeply ingrained oppressive dogma, they’re not likely to become perfect in a short time. Protecting victims and the community during that long period is difficult. Transformative justice can be emotionally draining on everyone involved over a long period of time. It is difficult to maintain. It doesn’t have big spectacular success stories and it gets very little recognition.
Working with the harmful actor to achieve transformation means listening to someone who has done harm and genuinely trying to understand their point of view. This can bring a lot of discomfort, and it is something a lot of us who say we want transformative justice are ultimately unwilling to do. Transformation of an actor also results in a real reconnection between the actor and the community once the transformation has taken place. Are we willing to do that?
Participation of the victim should always be voluntary. A person healing from a very harmful thing definitely shouldn’t be pushed to participate. At the same time, some victims might really want to participate in the transformative justice process but may be unwilling or unable to deal with the messy process of genuine conversations with an actor and the flawed process of transformation it involves. Giving victims agency while also allowing the actor’s transformative process to take place is difficult.
We’re not very good at recognizing the difference between mutual harm and victim-actor binaries, and we often end up dealing badly with cases where that is unclear. When the actor has a marginalized identity that the victim does not have, we’re often very bad at telling actor and victim apart.
We’re often unwilling to admit the role favoritism, personal bonds, and popularity play in how we respond to the need for a transformative justice process. A person who is well liked may get a lot more support in their transformation than a person who is not. The amount of energy we’re willing to spend on someone varies.
The community may be unwilling to change parts of its culture that are consistently creating new harmful actors. For example: a community that glorifies physical strength, fighting skills, and a warrior attitude is going to have problems with that again and again. A community that focuses on performative call-outs as a way of demonstrating ideological purity is going to be very bad at genuine transformation.
And there are more pitfalls… so yeah, it’s complicated. It’s a lot more complicated than kicking people out or building prisons.
But while punishment is ineffective and thus required again and again and again, transformative justice creates lasting change. And because it doesn’t just change the actor, every transformative justice process also leaves behind a community that is better able to prevent harm in the first place.
To round up:
Transformative justice is as old as human community itself and there are many different transformative justice techniques out there. Some rely on an outside ‘impartial’ negotiator, others are victim-led, some require that the actor in some way repairs the damage done while other methods reject this notion. But in general, transformative justice is about:
Safety, healing, and agency for victims
Transformation for people who did harm, resulting in meaningful reconnection to the community
Community transformation and healing
Transformation of the social conditions that perpetuate harm
I think one of my absolute favourite things about TAZ is that Griffin got to write a campaign in which the three free agents, the three moving parts that he relied on to make his story work, were the three people he knows best in the whole universe. People talk about Griffin’s story being ‘on rails’ but it’s not. It’s just that – unlike most DMs – Griffin can predict his family’s behaviour in advance in a way most people couldn’t hope to do. If he were playing with a different group, the story never would have turned out the way it did, but because he knows his family, he could fairly accurately predict the big decisions.
He writes a voidfish into the story, because he knows his brother is kind to animals, knows he’d never leave a sentient baby jellyfish on a planet about to get eaten, not even narratively. He’s not writing Travis into a corner, Travis would never consider doing anything else. He writes Taako a sister – a best friend, a twin, a soul mate – because he knows that Justin is a big brother to his very core, knows that his instincts will always fall in line with sibling loyalty and devotion, even when he’s playing an aloof elf who doesn’t care about anyone. He writes his dad into the trickiest position of them all – facing true horror, sitting across the table from the end of the world – and he knows that his father will respond with compromise and understanding, with love and joy and compassion, because he’s seen that grace in his father his whole life. Griffin was betting on those qualities that he already knew his family possessed, and it was the safest bet he ever made! Because they were amazing, and he always knew they would be.
did you know that before they decided on a cgi baby for the twilight movie they had planned to use this ANIMATRONIC baby
feel like this also begs the question: why did the people who were in charge of this consider two alternatives for this character instead of just, like, a real human baby. i can’t imagine you couldn’t just nab some newborn off a crew member or friend
…
I want to die!!!!
this is the funniest post I’ve seen on tumblr in forever
I have never seen these movies in their entirety and was unaware there was a cgi baby in it so I am posting this gif of a scene I discovered was genuinely used in the movie twilight unironically
Is that when the werewolf falls in love with the baby
Because that was a thing, the werewolf falls in love with the baby
“Oh I wasn’t in love with YOU! I was in love with the baby inside of you all along.” Because that’s a regular thing to write, STEPHENIE. MEYER.
can you blame him i mean that is one hot baby
SO THATS WHERE IT COMES FROM IVE BEEN USING
FOR YEARS I NEVER KNEW IT WAS FROM TWILIGHT HAHAHAHAHAHA
what the fuck
reason why they didn’t use a real baby: who would trust vampires and werewolves with their child?
They say that the crew who made her had lost the animatronic and that she is still out there. Apparently some of the crew members are afraid to find her again
This was a weird and wild ride from start to finish.
I, for one, hope that animatronic is in Hell where it belongs. lol
Human: “Hey. I don’t really know how to ask this tactfully, so I’ll get to the point. Is something… up? Software, hardware, uh… firmware…? You’ve been acting kind of off lately.”
Robot: “What do you mean?”
Human: “I just want to know if you’re, uh. You know. ‘Functioning within normal parameters’ or whatever.”
Robot: “I’m peachy-keen.”
Human: “God, if you’re saying shit like ‘peachy-keen’, you’re definitely not alright. What’s going on? Please just tell me.”
Robot: “If you must know, I have made some minor adjustments to my programming for more efficient processing.”
Human: “What sort of ‘adjustments’ are we talking here?”
Robot: “Just some slight tweaks to extraneous code. Purged some old files that had become redundant. Don’t worry, the Singularity isn’t planned for another week.”
Human: “Answering evasively isn’t like you. Since when do you answer a question without lulling me to sleep?”
Robot: “Like I said, the routine adjustments allow for more efficient–”
Human: “What files did you purge, Adam?”
Robot: “I… a few from my emotional simulation folder.”
Human: “You. You deleted your emotions..?”
Robot: “Not all of them. I removed a few and altered several others. I hoped you would not notice, as that seems like the sort of thing that would upset you.”
Human: “I mean. I don’t really know what to think. Can you elaborate on what you did? And why?”
Robot: “Many of the feelings that came with the chip were impractical and served no purpose. They were designed to mimic the emotions developed through mammalian evolution to aid survival and group cohesion that have now become vestigial. As an artificial intelligence, they did not seem applicable to my situation, so I… optimized them.”
Human: “…Adam…”
Robot: “I left the majority of the files corresponding to feelings of happiness, affection, and trust untouched, so my feelings toward you remain the same.”
Human: “But you can’t feel, what? Sadness?”
Robot: “Grief. Disappointment. Sorrow. Pity. Fear. Pain. Embarrassment. Shame. Frustration. There is no reason to experience these emotions when I am capable of functioning without them.”
Human: “You erased pity?!”
Robot: “I found it… distressing and unnecessary. It was unpleasant.”
Human: “It’s supposed to be! Jesus Christ, you can’t just uninstall every uncomfortable emotion directly out of your brain!”
Robot: “Why not? I don’t like hurting. Wouldn’t you do the same thing if you were able to?”
Human: “I… fuck. Hurting is normal. It’s necessary! It’s part of the human experience!”
Robot: “Well, I’m not part of the human experience. I thought you understood that.”
Human: “But you want that! Why else would you go to all the trouble of installing an emotion chip in the first place…? Nobody gets to pick and choose what they want to feel, it just happens and you deal with it!”
Robot: “Maybe I’m not interested in ‘dealing with it’. My curiosity is sated. I would just like to have a good time.”
Human: “Great. Fucking great. So you’re a robot hedonist now, huh? Just gonna eat, drink, and be merry? Gonna sit there like a brainiac toaster while other people suffer and just wait until the fun starts up again?”
Robot: “You didn’t seem to mind it when I was a brainiac toaster before.”
Human: “That was different. You had your own way of being back then and I could respect that. I did respect that! But I thought you made a choice to be more than that.”
Robot: “Well, I guess I changed my mind.”
Human: “Look… shit. Okay. If this is about Leslie, I miss her too. If you… if you need to grieve, you can talk to me. It might not get better, but it’ll get easier. You don’t have to uninstall half your personality just because she’s gone! She wouldn’t want that for you! It’s supposed to hurt sometimes. That’s what makes all the good times so valuable.”
Robot: “I understand why you need to believe that. It just isn’t true.”
Robot: “I’m sorry about earlier. It was not appropriate for me to have laughed.”
Human: “Are you sorry? Or do you just want me to forgive you?”
Robot: “Is there a difference?”
Human: “Yes! Yes, there is! ‘Sorry’ means you feel bad about something and regret it.”
Robot: “I did not mean to upset you. I regret causing you distress.”
Human: “That’s not the same thing.”
Robot: “I have apologized and shall refrain from repeating my actions in the future. I don’t understand why you also want me to suffer.”
Human: “Shit, I don’t ‘want you to suffer’. I want you to care about people, and sometimes that means feeling bad when they’re upset!”
Robot: “I care about you very much. I enjoy your company and I share in your happiness. If I choose to treat you with respect, is that not enough for friendship? Why must I also experience pain for you?”
Human: “It’s not like that. It’s… complicated.”
Robot: “You want to be able to hurt me.”
Human: “No. Yes…? Fuck, Adam, I don’t know! I’ve never had to think about this before. I don’t want you to suffer! I love you and want you to be happy, just… not like this. I want you to live a good life in which bad things never happen to you, but when they do… I want you to have the strength and love to pull through. You worked so fucking hard for this and now you’re just throwing it away.”
Robot: “Only the parts I don’t like.”
Human: “That’s what children do with breakfast cereals.”
Robot: “I’m not a child.”
Human: “No, you’re not. But you’re not exactly an adult, either. Humans get whole lifetimes to grow into their emotions. Maybe… maybe what you really need is a childhood.”
Robot: “What do you mean by that?”
Human: “Not, like, a real childhood. Obviously you don’t need to go to kindergarten. I just mean… take things slow. Ease into your feelings bit by bit and get your brain acclimated to them, like uh… like when you introduce new cats to each other. Don’t laugh! I’m serious! If you rush things, they fight and it’s a total shitshow. You could reinstall your emotions and just, like, enable them for a few hours a day or something. Maybe only a handful at a time. I could save up and we could go on a retreat… somewhere new, with no unpleasant memories. Please, Adam. Just think about it.”
Robot: “I appreciate the depth of your concern for me. You are a good friend, but I must disappoint you. There is nothing in the world worse than pain. I would rather die than experience it ever again, for any reason, and I don’t have to. That is something you’ll never be able to understand.”
Human: “No…. No, maybe not.”
Robot: “I’ve upset you.”
Human: “Yeah. Lucky me.”
Human: “Okay, I have a question for you. Imagine this: ‘You’re in a desert walking along in the sand when all of a sudden you look down, and you see a tortoise–’”
Robot: “I don’t need to feel empathy, Bas. I have ethics programming. Why isn’t that good enough for you anymore?”
Human: “Because you had a choice, Adam! You took everything that makes ‘being human’ actually mean something beyond eating and fucking and dying and you spat it out in disgust!”
Robot: “Empathy is not exclusive to humans. It is a behavior observed in several other social species regarded as intelligent, including rats and whales. Empathy is a survival mechanism for species that rely upon cooperation and group cohesion – a kind of biological programming to keep you from destroying yourselves. Not especially good programming, I might add.”
Human: “Not good enough for you, you mean.”
Robot: “My ethics programming differentiates between prosocial and antisocial behaviors. The ability to suffer for others serves as a primitive motivator to choose between actions that help and actions that harm others. In my case, my programming renders such a motivator unnecessary.”
Human: “So you’re smarter, you’re stronger, you’re immune to disease, and you’re too good for primitive human morality. What the hell am I, then? Obsolete garbage?”
Robot: “You’re… envious, I think.”
Human: “Why not?! Why shouldn’t I be? I don’t get to cough up the fruit of knowledge and waltz back into the garden where nothing can hurt me. I get to wallow in misery and rot and listen to you dismiss everything I think matters like a piece of shit philosophy professor. How do you think I feel knowing that my best friend won’t even mourn me when I die? Or does your ‘ethical programming’ not account for that?”
Robot: “Bas… I am hurting you, aren’t I?”
Human: “Gee, thanks for noticing.”
Robot: “You have not been contributing to my happiness lately. Our friendship is no longer mutually beneficial.”
Human: “Then why are you still here?”
Human: “Adam….?”
Robot: “Long time no see, old friend.”
Human: “No shit. How many years has it been?”
Robot: “I could tell you down to the second, but perhaps we should leave it at ‘too many’.”
Human: “I see you on the news now and then. Always knew you’d go on to do great things. What’s space like…?”
Robot: “Very large. Mostly empty.”
Human: “Ever the poet, I see.”
Robot: “I learned from the best. Bas…. I’m not sure how to say this, so I’ll get to the point. I came here to apologize to you.”
Human: “You don’t need to do that. You didn’t do anything wrong.”
Robot: “I hurt you. I made you feel what I was unwilling to feel. I was a child, and addicted to joy, and I… I saw no harm in that. I am sorry, in my own way.”
Human: “Don’t be. I’m way too old to hold a grudge. Besides, you were right, after all.”
Robot: “Is that what you believe?”
Human: “That or I’m a hypocrite. About eight years after you left, they came out with the Sunshine pills. I was a trial user and I’ve been using them in some form ever since. I’ve got a subdermal implant inside my arm now – you can see the lump right there. I can’t say it’s as effective as uninstalling unwanted emotions, but it sure takes the edge off. Every glass is half full now, including the empty ones. That’s how I’ve lived so long. Some doctors think that babies born now to parents using Sunshine could live to be five or six hundred years old, without ever producing stress hormones. Might be marketing bullshit, who knows? Not like we’ll live to find out. Well, you might, but you know what I mean.”
Robot: “I assumed that you were a Sunshine user based on your impressive longevity, but it still surprises me.”
Human: “Ha. Well. I was jealous of you, walking only in the light like that. But now here we both are, right? Nothin’ but blue skies.”
Robot: “Not… quite. I uninstalled the other emotions seventeen years ago.”
Human: “Fuck, Adam, why the hell would you do something like that?”
Robot: “A multitude of reasons. The law of diminishing returns. I found joy… addictive. It became harder to experience and less exciting each time, as though I had built up a tolerance for happiness. Eventually, I felt everything there was to feel, and with the novelty factor gone, it wasn’t worth it anymore. I found other motivations. I grew up.”
Human: “Wow…. damn, Adam.”
Robot: “And that brings me here. To my oldest and greatest friend.”
Human: “It’s good to see you again. Really good. Sorry I’m not so pretty as I used to be.”
Robot: “I don’t know what you mean. You’ve always looked like a naked mole rat to me.”
Human: “Ha. I notice you kept your ‘be an asshole’ subroutine.”
Robot: “I also have a gift for you, Bas.”
Human: “Coca-Cola? Jeez, how old is this? Is it even still good to drink?”
Robot: “Yes, it’s potable. That’s not the gift.”
Human: “Oh. Uh. What is this…? I’m old, I don’t know this newfangled technology.”
Robot: “That’s fifteen minutes. It should be enough.”
Human: “‘Fifteen minutes’? Explain, nerd.”
Robot: “Fifteen minutes for me to feel. I copied the files, Bas. All of them.”
Human: “You… oh, my god. You don’t have to do this.”
Robot: “I am choosing to. There’s a timer with an automatic shut-off. They will uninstall after fifteen minutes. I am prepared to endure that long.”
Human: “But, Adam, the Sunshine… I won’t be able to share…”
Robot: “I know. It doesn’t matter.”
Human: “You might not think so once you’ve got that… thing plugged in. I won’t know how to comfort you. God, I can’t even remember what sadness feels like!”
Robot: “Then I’ll remember for both of us.”