If AI controls wars – who bears responsibility for the mistakes?


Difficult moral dilemmas about whether or not to shoot are fundamental parts of human warfare. But who should bear responsibility for mistakes made in future wars where AI can make the decision to kill? It is a complex ethical question with no simple answer, writes MIT Technology Review. According to experts, however, it is important to find a balance that exploits the technology effectively without losing human control.

AI is making its way into decision-making in battle. Who’s to blame when something goes wrong?

By Arthur Holland Michel, 16 August, 2023

In a near-future war—one that might begin tomorrow, for all we know—a soldier takes up a shooting position on an empty rooftop. His unit has been fighting through the city block by block. It feels as if enemies could be lying in silent wait behind every corner, ready to rain fire upon their marks the moment they have a shot.

Through his gunsight, the soldier scans the windows of a nearby building. He notices fresh laundry hanging from the balconies. Word comes in over the radio that his team is about to move across an open patch of ground below. As they head out, a red bounding box appears in the top left corner of the gunsight. The device’s computer vision system has flagged a potential target—a silhouetted figure in a window is drawing up, it seems, to take a shot. The soldier doesn’t have a clear view, but in his experience the system has a superhuman capacity to pick up the faintest tell of an enemy. So he sets his crosshair upon the box and prepares to squeeze the trigger.

In a different war, also possibly just over the horizon, a commander stands before a bank of monitors. An alert appears from a chatbot. It brings news that satellites have picked up a truck entering a certain city block that has been designated as a possible staging area for enemy rocket launches.
The chatbot has already advised an artillery unit, which it calculates as having the highest estimated “kill probability,” to take aim at the truck and stand by. According to the chatbot, none of the nearby buildings is a civilian structure, though it notes that the determination has yet to be corroborated manually. A drone, which had been dispatched by the system for a closer look, arrives on scene. Its video shows the truck backing into a narrow passage between two compounds. The opportunity to take the shot is rapidly coming to a close.

For the commander, everything now falls silent. The chaos, the uncertainty, the cacophony—all reduced to the sound of a ticking clock and the sight of a single glowing button: “APPROVE FIRE ORDER.”

To pull the trigger—or, as the case may be, not to pull it. To hit the button, or to hold off. Legally—and ethically—the role of the soldier’s decision in matters of life and death is preeminent and indispensable. Fundamentally, it is these decisions that define the human act of war.

It should be of little surprise, then, that states and civil society have taken up the question of intelligent autonomous weapons—weapons that can select and fire upon targets without any human input—as a matter of serious concern. In May, after close to a decade of discussions, parties to the UN’s Convention on Certain Conventional Weapons agreed, among other recommendations, that militaries using them probably need to “limit the duration, geographical scope, and scale of the operation” to comply with the laws of war. The line was nonbinding, but it was at least an acknowledgment that a human has to play a part—somewhere, sometime—in the immediate process leading up to a killing.

But intelligent autonomous weapons that fully displace human decision-making have (likely) yet to see real-world use. Even the “autonomous” drones and ships fielded by the US and other powers are used under close human supervision.
Meanwhile, intelligent systems that merely guide the hand that pulls the trigger have been gaining purchase in the warmaker’s tool kit. And they’ve quietly become sophisticated enough to raise novel questions—ones that are trickier to answer than the well-covered wrangles over killer robots and, with each passing day, more urgent: What does it mean when a decision is only part human and part machine? And when, if ever, is it ethical for that decision to be a decision to kill?

For a long time, the idea of supporting a human decision by computerized means wasn’t such a controversial prospect. Retired Air Force lieutenant general Jack Shanahan says the radar on the F4 Phantom fighter jet he flew in the 1980s was a decision aid of sorts. It alerted him to the presence of other aircraft, he told me, so that he could figure out what to do about them. But to say that the crew and the radar were coequal accomplices would be a stretch.

That has all begun to change. “What we’re seeing now, at least in the way that I see this, is a transition to a world [in] which you need to have humans and machines … operating in some sort of team,” says Shanahan. The rise of machine learning, in particular, has set off a paradigm shift in how militaries use computers to help shape the crucial decisions of warfare—up to, and including, the ultimate decision.

Shanahan was the first director of Project Maven, a Pentagon program that developed target recognition algorithms for video footage from drones. The project, which kicked off a new era of American military AI, was launched in 2017 after a study concluded that “deep learning algorithms can perform at near-human levels.” (It also sparked controversy—in 2018, more than 3,000 Google employees signed a letter of protest against the company’s involvement in the project.)
With machine-learning-based decision tools, “you have more apparent competency, more breadth” than earlier tools afforded, says Matt Turek, deputy director of the Information Innovation Office at the Defense Advanced Research Projects Agency. “And perhaps a tendency, as a result, to turn over more decision-making to them.”

A soldier on the lookout for enemy snipers might, for example, do so through the Assault Rifle Combat Application System, a gunsight sold by the Israeli defense firm Elbit Systems. According to a company spec sheet, the “AI-powered” device is capable of “human target detection” at a range of more than 600 yards, and human target “identification” (presumably, discerning whether a person is someone who could be shot) at about the length of a football field. Anna Ahronheim-Cohen, a spokesperson for the company, told MIT Technology Review, “The system has already been tested in real-time scenarios by fighting infantry soldiers.”

Another gunsight, built by the company Smartshooter, is advertised as having similar capabilities. According to the company’s website, it can also be packaged into a remote-controlled machine gun like the one that Israeli agents used to assassinate the Iranian nuclear scientist Mohsen Fakhrizadeh in 2021.

Decision support tools that sit at a greater remove from the battlefield can be just as decisive. The Pentagon appears to have used AI in the sequence of intelligence analyses and decisions leading up to a potential strike, a process known as a kill chain—though it has been cagey on the details. In response to questions from MIT Technology Review, Laura McAndrews, an Air Force spokesperson, wrote that the service “is utilizing a human-machine teaming approach.” Other countries are more openly experimenting with such automation.
Shortly after the Israel-Palestine conflict in 2021, the Israel Defense Forces said it had used what it described as AI tools to alert troops of imminent attacks and to propose targets for operations. The Ukrainian army uses a program, GIS Arta, that pairs each known Russian target on the battlefield with the artillery unit that is, according to the algorithm, best placed to shoot at it. A report by The Times, a British newspaper, likened it to Uber’s algorithm for pairing drivers and riders, noting that it significantly reduces the time between the detection of a target and the moment that target finds itself under a barrage of firepower. Before the Ukrainians had GIS Arta, that process took 20 minutes. Now it reportedly takes one.

Russia claims to have its own command-and-control system with what it calls artificial intelligence, but it has shared few technical details. Gregory Allen, the director of the Wadhwani Center for AI and Advanced Technologies and one of the architects of the Pentagon’s current AI policies, told me it’s important to take some of these claims with a pinch of salt. He says some of Russia’s supposed military AI is “stuff that everyone has been doing for decades,” and he calls GIS Arta “just traditional software.”

The range of judgment calls that go into military decision-making, however, is vast. And it doesn’t always take artificial super-intelligence to dispense with them by automated means. There are tools for predicting enemy troop movements, tools for figuring out how to take out a given target, and tools to estimate how much collateral harm is likely to befall any nearby civilians. None of these contrivances could be called a killer robot. But the technology is not without its perils. Like any complex computer, an AI-based tool might glitch in unusual and unpredictable ways; it’s not clear that the human involved will always be able to know when the answers on the screen are right or wrong.
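The Times comparison above likens GIS Arta’s pairing of targets with artillery units to a ride-hailing dispatch algorithm. As a minimal sketch of what such a matching step could look like (the data layout, field names, and scoring rule here are all invented for illustration; the actual software is not public):

```python
import math

def best_unit(target, units):
    """Pick the unit 'best placed' to engage a target.

    Hypothetical scoring: among units whose range covers the target,
    prefer shorter distance and shorter reload time. Lower is better.
    """
    def distance(unit):
        return math.hypot(unit["x"] - target["x"], unit["y"] - target["y"])

    def score(unit):
        return distance(unit) + unit["reload_s"]

    # Only units that can actually reach the target are eligible.
    eligible = [u for u in units if distance(u) <= u["range"]]
    return min(eligible, key=score, default=None)

units = [
    {"name": "A", "x": 0, "y": 0, "range": 30, "reload_s": 10},
    {"name": "B", "x": 20, "y": 0, "range": 30, "reload_s": 5},
]
target = {"x": 25, "y": 0}
print(best_unit(target, units)["name"])  # prints "B": in range, closer, faster
```

The real system reportedly weighs far more factors; the sketch only illustrates how “best placed” can reduce to minimizing a score over eligible units, which is what makes the dispatch so fast.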
In their relentless efficiency, these tools may also not leave enough time and space for humans to determine if what they’re doing is legal. In some areas, they could perform at such superhuman levels that something ineffable about the act of war could be lost entirely.

Eventually militaries plan to use machine intelligence to stitch many of these individual instruments into a single automated network that links every weapon, commander, and soldier to every other. Not a kill chain, but—as the Pentagon has begun to call it—a kill web. In these webs, it’s not clear whether the human’s decision is, in fact, very much of a decision at all. Rafael, an Israeli defense giant, has already sold one such product, Fire Weaver, to the IDF (it has also demonstrated it to the US Department of Defense and the German military). According to company materials, Fire Weaver finds enemy positions, notifies the unit that it calculates as being best placed to fire on them, and even sets a crosshair on the target directly in that unit’s weapon sights. The human’s role, according to one video of the software, is to choose between two buttons: “Approve” and “Abort.”

Let’s say that the silhouette in the window was not a soldier, but a child. Imagine that the truck was not delivering warheads to the enemy, but water pails to a home.

Of the DoD’s five “ethical principles for artificial intelligence,” which are phrased as qualities, the one that’s always listed first is “Responsible.” In practice, this means that when things go wrong, someone—a human, not a machine—has got to hold the bag.

Of course, the principle of responsibility long predates the onset of artificially intelligent machines. All the laws and mores of war would be meaningless without the fundamental common understanding that every deliberate act in the fight is always on someone. But with the prospect of computers taking on all manner of sophisticated new roles, the age-old precept has newfound resonance.
“Now for me, and for most people I ever knew in uniform, this was core to who we were as commanders: that somebody ultimately will be held responsible,” says Shanahan, who after Maven became the inaugural director of the Pentagon’s Joint Artificial Intelligence Center and oversaw the development of the AI ethical principles.

This is why a human hand must squeeze the trigger, why a human hand must click “Approve.” If a computer sets its sights upon the wrong target, and the soldier squeezes the trigger anyway, that’s on the soldier. “If a human does something that leads to an accident with the machine—say, dropping a weapon where it shouldn’t have—that’s still a human’s decision that was made,” Shanahan says.

But accidents happen. And this is where things get tricky. Modern militaries have spent hundreds of years figuring out how to differentiate the unavoidable, blameless tragedies of warfare from acts of malign intent, misdirected fury, or gross negligence. Even now, this remains a difficult task. Outsourcing a part of human agency and judgment to algorithms built, in many cases, around the mathematical principle of optimization will challenge all this law and doctrine in a fundamentally new way, says Courtney Bowman, global director of privacy and civil liberties engineering at Palantir, a US-headquartered firm that builds data management software for militaries, governments, and large companies.

“It’s a rupture. It’s disruptive,” Bowman says. “It requires a new ethical construct to be able to make sound decisions.”

This year, in a move that was inevitable in the age of ChatGPT, Palantir announced that it is developing software called the Artificial Intelligence Platform, which allows for the integration of large language models into the company’s military products. In a demo of AIP posted to YouTube this spring, the platform alerts the user to a potentially threatening enemy movement.
It then suggests that a drone be sent for a closer look, proposes three possible plans to intercept the offending force, and maps out an optimal route for the selected attack team to reach them.

And yet even with a machine capable of such apparent cleverness, militaries won’t want the user to blindly trust its every suggestion. If the human presses only one button in a kill chain, it probably should not be the “I believe” button, as a concerned but anonymous Army operative once put it in a DoD war game in 2019.

In a program called Urban Reconnaissance through Supervised Autonomy (URSA), DARPA built a system that enabled robots and drones to act as forward observers for platoons in urban operations. After input from the project’s advisory group on ethical and legal issues, it was decided that the software would only ever designate people as “persons of interest.” Even though the purpose of the technology was to help root out ambushes, it would never go so far as to label anyone as a “threat.” This, it was hoped, would stop a soldier from jumping to the wrong conclusion. It also had a legal rationale, according to Brian Williams, an adjunct research staff member at the Institute for Defense Analyses who led the advisory group. No court had positively asserted that a machine could legally designate a person a threat, he says. (Then again, he adds, no court had specifically found that it would be illegal, either, and he acknowledges that not all military operators would necessarily share his group’s cautious reading of the law.) According to Williams, DARPA initially wanted URSA to be able to autonomously discern a person’s intent; this feature too was scrapped at the group’s urging.
Bowman says Palantir’s approach is to work “engineered inefficiencies” into “points in the decision-making process where you actually do want to slow things down.” For example, a computer’s output that points to an enemy troop movement, he says, might require a user to seek out a second corroborating source of intelligence before proceeding with an action (in the video, the Artificial Intelligence Platform does not appear to do this). In the case of AIP, Bowman says the idea is to present the information in such a way “that the viewer understands, the analyst understands, this is only a suggestion.”

In practice, protecting human judgment from the sway of a beguilingly smart machine could come down to small details of graphic design. “If people of interest are identified on a screen as red dots, that’s going to have a different subconscious implication than if people of interest are identified on a screen as little happy faces,” says Rebecca Crootof, a law professor at the University of Richmond, who has written extensively about the challenges of accountability in human-in-the-loop autonomous weapons.

In some settings, however, soldiers might only want an “I believe” button. Originally, DARPA envisioned URSA as a wrist-worn device for soldiers on the front lines. “In the very first working group meeting, we said that’s not advisable,” Williams told me. The kind of engineered inefficiency necessary for responsible use just wouldn’t be practicable for users who have bullets whizzing by their ears. Instead, they built a computer system that sits with a dedicated operator, far behind the action.

But some decision support systems are definitely designed for the kind of split-second decision-making that happens right in the thick of it. The US Army has said that it has managed, in live tests, to shorten its own 20-minute targeting cycle to 20 seconds. Nor does the market seem to have embraced the spirit of restraint.
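Bowman’s “engineered inefficiency” of requiring a second corroborating intelligence source before an action proceeds amounts to a deliberate gate in the workflow. A minimal sketch of that pattern, with invented function and field names (nothing here reflects Palantir’s actual implementation):

```python
def can_proceed(alert, corroborations, required=2):
    """Block an action until enough independent sources agree.

    `alert` is the machine's initial detection; `corroborations` is a
    list of source identifiers that independently support it. The action
    is allowed only when the number of *distinct* sources, counting the
    original detection, meets the threshold.
    """
    sources = {alert["source"]} | set(corroborations)
    return len(sources) >= required

alert = {"event": "troop movement", "source": "satellite"}
print(can_proceed(alert, []))          # False: a single source is not enough
print(can_proceed(alert, ["sigint"]))  # True: two distinct sources agree
```

Note that re-reporting from the same source would not open the gate; the friction is the point, trading speed for an independent check before the human acts.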
In demo videos posted online, the bounding boxes for the computerized gunsights of both Elbit and Smartshooter are blood red.

Other times, the computer will be right and the human will be wrong. If the soldier on the rooftop had second-guessed the gunsight, and it turned out that the silhouette was in fact an enemy sniper, his teammates could have paid a heavy price for his split second of hesitation. This is a different source of trouble, much less discussed but no less likely in real-world combat. And it puts the human in something of a pickle. Soldiers will be told to treat their digital assistants with enough mistrust to safeguard the sanctity of their judgment. But with machines that are often right, this same reluctance to defer to the computer can itself become a point of avertable failure. Aviation history has no shortage of cases where a human pilot’s refusal to heed the machine led to catastrophe. These (usually perished) souls have not been looked upon kindly by investigators seeking to explain the tragedy.

Carol J. Smith, a senior research scientist at Carnegie Mellon University’s Software Engineering Institute who helped craft responsible AI guidelines for the DoD’s Defense Innovation Unit, doesn’t see an issue: “If the person in that moment feels that the decision is wrong, they’re making it their call, and they’re going to have to face the consequences.”

For others, this is a wicked ethical conundrum. The scholar M.C. Elish has suggested that a human who is placed in this kind of impossible loop could end up serving as what she calls a “moral crumple zone.” In the event of an accident—regardless of whether the human was wrong, the computer was wrong, or they were wrong together—the person who made the “decision” will absorb the blame and protect everyone else along the chain of command from the full impact of accountability.
In an essay, Smith wrote that the “lowest-paid person” should not be “saddled with this responsibility,” and neither should “the highest-paid person.” Instead, she told me, the responsibility should be spread among everyone involved, and the introduction of AI should not change anything about that responsibility.

In practice, this is harder than it sounds. Crootof points out that even today, “there’s not a whole lot of responsibility for accidents in war.” As AI tools become larger and more complex, and as kill chains become shorter and more web-like, finding the right people to blame is going to become an even more labyrinthine task. Those who write these tools, and the companies they work for, aren’t likely to take the fall. Building AI software is a lengthy, iterative process, often drawing from open-source code, which stands at a distant remove from the actual material facts of metal piercing flesh. And barring any significant changes to US law, defense contractors are generally protected from liability anyway, says Crootof.

Any bid for accountability at the upper rungs of command, meanwhile, would likely find itself stymied by the heavy veil of government classification that tends to cloak most AI decision support tools and the manner in which they are used. The US Air Force has not been forthcoming about whether its AI has even seen real-world use. Shanahan says Maven’s AI models were deployed for intelligence analysis soon after the project launched, and in 2021 the secretary of the Air Force said that “AI algorithms” had recently been applied “for the first time to a live operational kill chain,” with an Air Force spokesperson at the time adding that these tools were available in intelligence centers across the globe “whenever needed.” But Laura McAndrews, the Air Force spokesperson, said that in fact these algorithms “were not applied in a live, operational kill chain” and declined to detail any other algorithms that may, or may not, have been used since.
The real story might remain shrouded for years. In 2018, the Pentagon issued a determination that exempts Project Maven from Freedom of Information requests. Last year, it handed the entire program to the National Geospatial-Intelligence Agency, which is responsible for processing America’s vast intake of secret aerial surveillance. Responding to questions about whether the algorithms are used in kill chains, Robbin Brooks, an NGA spokesperson, told MIT Technology Review, “We can’t speak to specifics of how and where Maven is used.”

In one sense, what’s new here is also old. We routinely place our safety—indeed, our entire existence as a species—in the hands of other people. Those decision-makers defer, in turn, to machines that they do not entirely comprehend. In an exquisite essay on automation published in 2018, at a time when operational AI-enabled decision support was still a rarity, former Navy secretary Richard Danzig pointed out that if a president “decides” to order a nuclear strike, it will not be because anyone has looked out the window of the Oval Office and seen enemy missiles raining down on DC but, rather, because those missiles have been detected, tracked, and identified—one hopes correctly—by algorithms in the air defense network.

As in the case of a commander who calls in an artillery strike on the advice of a chatbot, or a rifleman who pulls the trigger at the mere sight of a red bounding box, “the most that can be said is that ‘a human being is involved,’” Danzig wrote. “This is a common situation in the modern age,” he wrote. “Human decisionmakers are riders traveling across obscured terrain with little or no ability to assess the powerful beasts that carry and guide them.”

There can be an alarming streak of defeatism among the people responsible for making sure that these beasts don’t end up eating us.
During a number of conversations I had while reporting this story, my interlocutor would land on a sobering note of acquiescence to the perpetual inevitability of death and destruction that, while tragic, cannot be pinned on any single human. War is messy, technologies fail in unpredictable ways, and that’s just that. “In warfighting,” says Bowman of Palantir, “[in] the application of any technology, let alone AI, there is some degree of harm that you’re trying to—that you have to accept, and the game is risk reduction.”

It is possible, though not yet demonstrated, that bringing artificial intelligence to battle may mean fewer civilian casualties, as advocates often claim. But there could be a hidden cost to irrevocably conjoining human judgment and mathematical reasoning in those ultimate moments of war—a cost that extends beyond a simple, utilitarian bottom line. Maybe something just cannot be right, should not be right, about choosing the time and manner in which a person dies the way you hail a ride from Uber. To a machine, this might be suboptimal logic. But for certain humans, that’s the point.

“One of the aspects of judgment, as a human capacity, is that it’s done in an open world,” says Lucy Suchman, a professor emerita of anthropology at Lancaster University, who has been writing about the quandaries of human-machine interaction for four decades. The parameters of life-and-death decisions—knowing the meaning of the fresh laundry hanging from a window while also wanting your teammates not to die—are “irreducibly qualitative,” she says. The chaos and the noise and the uncertainty, the weight of what is right and what is wrong in the midst of all that fury—not a whit of this can be defined in algorithmic terms. In matters of life and death, there is no computationally perfect outcome. “And that’s where the moral responsibility comes from,” she says. “You’re making a judgment.”

The gunsight never pulls the trigger. The chatbot never pushes the button.
But each time a machine takes on a new role that reduces the irreducible, we may be stepping a little closer to the moment when the act of killing is altogether more machine than human, when ethics becomes a formula and responsibility becomes little more than an abstraction. If we agree that we don’t want to let the machines take us all the way there, sooner or later we will have to ask ourselves: Where is the line?

© 2023 Technology Review, Inc. Distributed by Tribune Content Agency.

Rare comet passes Earth – here is how to see it


In Sweden it is unusual for a comet to be as visible as Tsuchinshan-ATLAS. Researcher Eric Stempels judges that this is the first time since Hale-Bopp, which passed Earth in 1995, that a comet has been so clearly visible from Sweden. The comet appears as a faint, elongated smudge toward the southwest in the evening sky. It should be visible as early as half an hour after sunset, just above the horizon.

– You see it about a hand’s width above the horizon if you hold your hand at arm’s length, says Eric Stempels.

– A comet is a very small body, and it is this dust cloud, or tail, that you actually see.

With the right instruments the comet can be seen even better, and with a telescope it can be observed even after October 21, when it stops being visible to the naked eye.

Water that evaporates

Comets have large orbits, which means they spend most of their time far from the sun. When comets, which carry a great deal of water and ice, do come close to the sun, the surface evaporates and the dust cloud forms. The visible tail points away from the sun.

– If it passes really close to the sun, a great deal of this material can be released. Then the tail can become very fine and strong, explains Eric Stempels, adding:

– Right now it has just passed the sun, which means the material is lit up strongly by the sun and we can see it from Earth.

The researcher’s tip

The comet’s orbit means it will be a long time before it can be seen from Earth again. But with a phone camera, most people can take a really good picture of the rare comet, explains Eric Stempels.

– If you have a reasonably modern phone, you can sometimes see it better by taking a long-exposure picture than with the naked eye. So the phone can help a great deal in bringing out the image of the comet.

Favorable weather

A high-pressure system over Sweden means the weather will remain clear, so the chances of seeing the comet will be good. In addition, a supermoon will be visible during the evening (Wednesday).
– If you are in an area where the sky is clear, you will be able to see it with the naked eye. It doesn’t matter where you live, says Madeleine Westin, meteorologist at TV4, on Nyhetsmorgon.

Tsuchinshan-ATLAS is visible toward the southwest in the evening sky and can be seen until October 21.

Celebrity couple’s daughter swallowed a battery – then lay in a respirator


It all began when Melinda Jacobs and Martin “E-Type” Eriksson’s daughter Izadora became upset one evening. She pointed at her throat and mouth, but nothing looked unusual when they looked inside her mouth. The next day her condition had worsened.

– When she was going to have breakfast, the food wouldn’t go down, so we went to the emergency room, says Melinda.

“My blood ran cold”

A battery had become lodged in Izadora’s esophagus, threatening to corrode a hole in her throat. “My blood ran cold when I heard this; I know it is life-threatening,” says Melinda Jacobs. Time was short, and within half an hour Izadora was on the operating table. After that, the family spent four days in intensive care, where Izadora lay in a respirator.

– Over the following days we swung between hope and despair, says Melinda Jacobs.

“Every minute counts”

When the battery comes into contact with the mucous membrane, a corrosive substance forms, explains Erik Lindeman, senior physician at the Swedish Poisons Information Centre. The battery Izadora swallowed is called a button cell battery and is often found in watches and toys. It is smaller than an ordinary battery and easily gets stuck in a child’s throat.

– It is among the most dangerous things we have in our homes. If you suspect that a child has swallowed a battery, it is an extremely acute situation. Every minute counts; every minute does more damage. Ideally it should be out within two hours, says Erik Lindeman.

“We still don’t know where it came from”

Izadora is now feeling better and will soon be allowed to come home from the hospital. After being tube-fed for two weeks, she will try to eat and drink as usual. Having gone through every parent’s nightmare, Melinda and Martin want to share their story to warn others.

– Secure your homes and go through your children’s rooms; this battery is found in many toys. We still don’t know where the battery came from, so we are going to go through every centimeter of our home, says Melinda Jacobs.

Betrays his team right away on Robinson – here the contestant sabotages on purpose


This autumn’s season of “Robinson” gets off to an unusual start, with three teams competing to stay. There is room for two teams on Robinson; the third goes straight to Gränslandet. After a first challenge, one team has secured its place and forms Team South. The other two teams will face off in a further challenge. But before that, Thomas Bergkvist and Roger Flink, who each made sure that their respective teams lost the first challenge, must negotiate. The choice stands between an advantage for their team in the deciding challenge and an amulet that keeps its holder on Robinson regardless of whether the team wins or loses. If they fail to agree, they both end up in Gränslandet.

Roger, who works as a contract negotiator, leads his team to believe he will negotiate the best outcome for the group. That is not how it turns out: he comes back with the amulet.

– It feels good anyway, now that I have this. You can relax in a better way and do the challenge without stress in your body, Roger tells the team.

But the more he talks about it, the more irritated the others become. Amanda Rivas notes that he seems neither remorseful nor sad.

– And then I also realize that this is the jerk who made us lose. This feels really rotten, really bad, she says.

“I almost feel like whacking him”

And the irritation would prove to grow. Roger decides to slow his team down during the challenge. He deliberately leaves one of the puzzle pieces hidden in the bag, even though his only task in the challenge was to get all the puzzle pieces out of the bag.

– I just look at him, like: “are you serious?”, says Amanda.

– I almost feel like whacking him. It’s on that level. How? You have one job. You were supposed to take out the puzzle pieces. You forget a piece. Then I just thought, you bastard, she continues.

In a confessional, Roger explains that he spent the whole night thinking about how to slow his team down without it showing too much.
– Before the challenge I had laid my plans, and I was going to try to make sure this team did not win. If they had won, I would have been the first one they voted out to Gränslandet, says Roger.

– Morally wrong, you could say. But a game is a game, he continues.
