By Nicholas West
There has been a troubling uptick in talk of “ethics” recently within scientific circles focused on robotics, artificial intelligence, and brain research. I say troubling because, embedded within the standard appeals for caution that should appear in science, there also seems to be a tacit admission that things might be quickly spiraling out of control, as we are told of meetings, conventions, and workshops that have the ring of emergency scrambles more than debating society confabs.
Yesterday, Activist Post republished commentary from Harvard which cited a 52-page Stanford study looking into what artificial intelligence might look like in the year 2030. That report admits that much of what the general public believes to be science fiction – pre-crime, for example – is already being implemented or is well on the way to impacting people’s day-to-day lives. We have seen the same call for ethical standards and caution about “killer robots” when, in fact, robots are already killing and injuring humans. All that is left to be considered, presumably, is the degree to which these systems should be permitted to become fully autonomous.