Droning on from Obama to Trump

The deployment of unmanned aerial drones by the United States military and C.I.A. to execute alleged enemies of the U.S. is unethical, unconstitutional, and undermines its credibility as a source of global leadership, justice, and integrity. These drone missions threaten relations with international allies and must cease if the United States is to remain a respected leader in the global war on terror.

The use of unmanned combat aerial vehicles, known colloquially as ‘drones,’ has more than tripled under the Obama administration. While no president wants to be responsible for the deaths of American soldiers, the unmanned drones that have replaced on-the-ground troops in many areas are capable of neither informed wartime decision-making nor precise execution. The result is that one in three people killed by a U.S. drone in Pakistan has been a civilian, a staggeringly high proportion of innocent victims for the military leading the so-called fight against terrorism.

This practice is bad for the countries with which the United States collaborates and is equally bad for the U.S. Drone attacks fuel anti-American sentiment and spark distrust of local governments and their law enforcement efforts. Deadly suicide bombings have increased following drone attacks, while the imprecise nature of the execution attempts has made their success questionable at best. Taliban leader Hakimullah Mehsud, for example, was reportedly killed multiple times by drone attacks, a testament to the misinformation and imprecision that plague drone operations.

The use of unmanned drones also marginalizes the role of United States soldiers. In many instances, Special Forces personnel are better equipped to act on the available intelligence and counter the unconventional war tactics in Pakistan, Afghanistan, Libya, and Iraq. Instead, they are replaced by remote-controlled devices that can neither react to immediate changes in theater nor make informed judgments over the course of a mission. While soldiers commit their lives to rigorous training and sacrifice, drone missions often rely on brute force, sloppily delivered and resulting in the collateral deaths of innocent civilians.

The fallout from drone executions has marred United States foreign policy among its allies. The United Nations special rapporteur on extrajudicial killings, Philip Alston, delivered a harsh judgment on drone attacks, saying they present a “risk of developing a PlayStation mentality to killing.” As for the stress placed on the servicemen and women piloting the drones, the New York Times reports that drone pilots suffer the same PTSD and stress-related depression as soldiers on the ground. Meanwhile, Pakistan, the country where the majority of attacks take place, has publicly objected to the executions despite its complicity.

For his part, President-elect Donald Trump has vowed to continue using drones to execute alleged terrorists. Given the rampant Islamophobia that characterized his campaign rhetoric, drone expert Jameel Jaffer fears the outlook is bleak unless there is a significant public backlash against the use of combat drones in the Middle East.

Unmanned drone killings are neither ethical nor effective and are jeopardizing the status of the United States as a credible world leader in the global fight against terrorism. In order to salvage the damage done by these machines, the U.S. must significantly reduce its use of unmanned drones in combat roles and begin working with its few remaining allies in the Middle East to rebuild the trust it has lost from the terror its drones have inflicted.

– Tyler

Automatic weapons and manual transmissions

With the horrifying mass shooting in Orlando, which left 49 innocent people dead and dozens more wounded, still very fresh in our minds, gun violence and the rights of gun ownership are virtually all anyone with an opinion can talk about right now. While obviously salient at this juncture (especially given the legislative action and inaction and the epic Congressional sit-in that just took place), I predict that in the coming decades we will be having a parallel conversation about driving cars.

Let’s look at it this way: We are currently at roughly the same place with cars that we were 80 years ago with guns. Both are lethal, and both are a leading cause of death. Yet cars are a part of everyday life for commuters in the same way farmers, ranchers, and sportsmen relied heavily on guns (and to some extent still do) for their livelihood. Many have managed to do without either, but over time both have become part of the American pastime, whether that’s cruising along a winding road on a Sunday afternoon or making a trip to the firing range to hone skills and perhaps let off some steam.

But technology changes the utility of these contrivances. Just as guns have become so efficient that their killing capacity can no longer be justified for civilian distribution, the development of safer and more efficient driverless cars will likely pose the same challenge to justifying human operation.

Right now we’re still basically in the “pre-driverless car” era. The cars exist in a nascent state, but problems abound. The New York Times reports that engineers are still vexed by teaching driverless cars road nuances like potholes and speed bumps. So the vast majority of us are still better drivers than a computer system in a car. Chances are this will change: the ability of cars to calculate conditions, speed, stopping distances, and a host of other quantifiable details, combined with mechanical precision, will mean a road full of driverless cars is far safer than the human error that leads to tens of thousands of traffic fatalities each year. Fewer car accidents sounds great, right?

Given America’s love affair with driving automobiles, it’s going to be a rocky transition to that automated traffic flow, if we ever get there. You see, in the same way Americans believe it’s their right to own and operate killing machines like the AR-15, the rifle used in many of the deadliest mass shootings of the last five years, the liberty of driving a car is equally ingrained in the American psyche. Even though car accidents caused by human error amount to over 30,000 unnecessary deaths every year, there are still large segments of the population who enjoy driving far too much to give up the pastime without major pushback. Not unlike those who enjoy a pastime of shooting guns.

What this will mean is hard to say. It’s unclear at this time to what extent technology will advance and perhaps meet in the middle between safety and liberty. Can guns have an electronic kill switch that prevents them from being used for malicious purposes? Can cars assume control of driving only when there is a real and present threat to the safety of others on the road? Will we need a drivers’ lobby for those of us who still like the idea of pleasure driving, just as the NRA exists for those who want to keep shooting their guns?

Perhaps we can reserve manually operated cars and firearms for controlled environments and ban them in civilian life. Want to drive a car fast and recklessly? Go to a track and use theirs. Want to fire off a couple of rounds from an automatic rifle? Go to the range and use theirs. Oh, you want the right to own and operate a deadly machine whenever you want? I guess that’s where the lawyers come in.

– Tyler

A Brighter Futurism

I finally had a chance to listen to Neil deGrasse Tyson interview inventor/futurist/AI soothsayer Ray Kurzweil on the season premiere of Star Talk, and it turned out to be pretty much what I expected: Kurzweil rigidly answered the host’s hypothetical questions with robotic confidence, while guest neuroscientist Gary Marcus questioned the scientific validity of Kurzweil’s predictions, and Pulitzer Prize-winning author and professor of cognitive science Douglas Hofstadter compared Kurzweil’s worldview to lunacy and dog excrement intermingled with a few scientifically sound and reality-based predictions.

I’m not going to comment on the relative accuracy of Kurzweil’s science, but I do think the ethics of AI and nanotechnology are among the most pressing issues of the next twenty years and thus worth exploring, even in vague terms, here.

Ethics has historically almost always lagged behind technology, so I was especially heartened to hear Dr. Marcus vow to meet these demands as the founder of aiforgood.com. While Kurzweil touts neurocognitive nanobots as a foregone conclusion and a positive step in evolution, he also seems optimistic about how the technology will be used, discussing the moral imperative that accompanies it. Dr. Marcus is more measured in his predictions and more attentive to the barriers that currently exist, taking a critical perspective on Moore’s law (or, as Dr. Marcus calls it, Moore’s Trend), which describes the exponential acceleration of technological capability. The truth is that this acceleration has in fact slowed, so even with reference to the initial trend, the future pace of advancement cannot be easily extrapolated. Dr. Marcus also acknowledges the dark side of unharnessed technological advancement: stopping short of omens about grey goo, he notes the potential for abuse by terrorists, tyrannical governments, and other evildoers, and stresses the need for early regulation and ethical codes, since technology often proceeds quickly when left unrestrained.

Perhaps my biggest problem with Kurzweil’s predictions is not the scientific validity of the ideas he so adamantly proposes but his use of “we” to denote those who will be using, benefitting from, and engaging in “The Singularity” (the forthcoming date, which he places at 2045, when machines will function, reproduce, and blend with human biology). Perhaps he addresses this in his books (which, full disclosure, I have not yet read), but I believe there will continue to be large swaths of the population who have no interest in merging with machines. From uncontacted tribes to the Amish to modern-day hippies and naturalists, I don’t foresee the entire human race necessarily jumping aboard this invasive technology.

The other major flaw (albeit one not related to ethics) in the predictions espoused in the interview was a variation of immortality achieved by creating digital copies of brain scans. Kurzweil describes it as “uploading to ‘the cloud,’” but the idea is that the electrical impulses that travel the synapses of the brain can be technologically cloned to operate through computer systems. I don’t have a major problem with this idea, except for the use of the word immortality, or the notion that anything in this universe escapes a finite conclusion. Eventually the sun will swell into a red giant and engulf the Earth, and even if that is somehow avoided, most competing theoretical physics models (including those favored by host Neil deGrasse Tyson) conclude with an eventual demise of the universe. In my view, it’s hard for immortality to endure the end of the universe.

But it does illuminate an interesting question to be explored in the future: if The Singularity does occur as Kurzweil describes, what will we ultimately fear? Death, or unrelenting existence past a natural life cycle?

– Tyler