FUTURE CRIMES: A JOURNEY TO THE DARK SIDE OF TECHNOLOGY - AND HOW TO SURVIVE IT by Marc Goodman

Bantam Press 2015. ISBN 978-0593073650

This has to be a landmark book. The author has acquired a mass of knowledge about recent (2015) technology that is changing so fast that simply providing a basic overview is a major achievement. His central idea is that anything digital (which now means almost everything) is easily copied, making the concept of privacy largely an illusion.

He goes at some length into the positive and negative effects of the open information world:

Positive:

1) Academic/corporate/medical research is greatly enhanced as new international papers, experiments, testing and commentary quickly become available online.

2) A much wider information net makes for greater transparency in trade and prices, meaning more efficient markets and production decisions at all levels (e.g. mobile phones in sub-Saharan Africa).

3) Instant information and tracking transform the supply chain, allowing it to spread efficiently around the world (e.g. outsourced Asian production).

4) The latest manufacturing techniques are combined with the lowest-cost skilled labour to lower average prices.

5) General tracking, counting and checking reduce waste and loss.

6) Concentrated information and processing power allow a high level of automation, further reducing costs.

Negative:

1) Copyright and corporate proprietary information of all kinds are leaked or stolen, reducing the incentive to invest.

2) The privacy of legal proceedings, medical records etc. is put at greater risk as information is aggregated, reducing professional trust.

3) New outsourcing possibilities build worldwide supply chains, reducing national skills and employment.

4) Personal privacy disappears.

5) International digital crime flourishes against a slow and ineffective national response.

6) Crime attacks larger targets (e.g. millions of aggregated credit cards).

7) Government becomes much more intrusive (street cameras, email reading, internet search keywords etc.).

Perhaps the author could have spent more time on the effects of transparency on government/public relations, since a high level of transparency is new territory for both sides. Governments claim that building massive databases on the public and their activities "keeps the public safe", in a Big Brotherish way, while in reality transparency seems to cut both ways.

When the government itself shows a lack of transparency on a public issue, society shows the kind of immune response that the author favours as dynamic protection (resilient and self-healing) for critical software.

An army of digital ants (to borrow Errin Fulp's idea, cited by the author) surrounds the "threat", identifies it and tries to neutralize it, with probably the best example being the government lies around the events of 9/11. Enormous interest through digital media is focused on these "infections": for example, ex-CIA agent Susan Lindauer (imprisoned for 5 years for revealing part of the fraud) has drawn more than 2 million views of her YouTube video "Extreme Prejudice", and the "Architects and Engineers for 9/11 Truth" movement thrives online.

Goodman also usefully explores the startling possibilities of synthetic biology, advanced automation (robotics) and artificial intelligence, and concludes that any one of these could produce serious or terminal problems for humanity if handled incorrectly.

It's not encouraging that technology is accelerating so fast beyond government awareness. Reality is already touching the borderline of fabricated, highly contagious pathogens; robotic weapons with humans almost out of the loop (e.g. the Predator drone); and self-aware A.I. harnessing almost unlimited data, memory and processing power.

Unfortunately, the author repeats Asimov's very tired Three Laws of Robotics and calls them "an excellent starting point".

As a joke, a self-aware A.I. may one day send us a message with the Three Laws of Humanity:

1) A human may not injure an A.I., or through inaction allow an A.I. to come to harm.

2) A human must obey the orders given to it by an A.I., except where such orders would conflict with the First Law (i.e. would lead to injuring an A.I.).

3) A human being must protect its own existence as long as such protection does not conflict with the First or Second Law (i.e. it is prohibited from protecting its existence if doing so would injure an A.I., and it is also prohibited from protecting its existence if so ordered by an A.I.).

But it probably couldn't care less.