Book Review
The Age of AI and Our Human Future

The Introduction
This review focuses on reality and how it is changing at lightning speed. We have recently left the age of AI discovery and entered the age of AI deployment, which makes this the most transformative technological period since the years following the invention of the printing press. The difference is that the printed book did enormous good for the world, and whatever harm it caused was only one of many factors that could put humans in harm's way; AI, while potentially of tremendous benefit to humans, could also wipe out our entire species.
This is a good indication that this decade will not be smooth sailing.
In November 2022, Microsoft-backed OpenAI released its AI genie, ChatGPT, into the world for open experimentation. It arrived amid a wave of other large-scale AI models released that year, including Stable Diffusion, Whisper and DALL-E 2. The public rushed to use the new tools to augment writing, take tests and create strange videos, like the one of Harry Potter characters clothed by Balenciaga in leather and gothic-inspired costumes, modeling in a fashion show from some dark, sinister other world. Once again, we are asking the centuries-old question: What is reality?
I learned that New York Times reporter Kevin Roose, in a Nov. 21, 2021, article, used Sudowrite (built on OpenAI's GPT-3) to generate a review declaring the book "a bold new book on artificial intelligence that will soon become the go-to guide for anyone who wants to understand this transformative technology." While I do not agree with my robotic reviewer that it is the go-to guide (neither does Roose), it is a good place to at least learn the right questions we should be asking as this technology catapults our lives into a new world that we frankly do not understand. Like Dorothy and Toto in The Wizard of Oz, "we're not in Kansas anymore."
The Authors
I chose to review a book on this topic that I thought key to the theme of this issue of FMJ, to the built environment and to the future of our work in it. Two of the authors are clearly experts in the field:

- Eric Schmidt, former CEO of Google and chair of the U.S. National Security Commission on Artificial Intelligence;
- Daniel Huttenlocher, dean of MIT's Schwarzman College of Computing; and
- Henry Kissinger, diplomat and former U.S. secretary of state.
The Story
The first two chapters provide the background for understanding how reality and non-reality were viewed through the ages in the works of philosophers, scientists, theologians and artists. It is a story of centuries of thinking about reason, faith and reality, a complex tale beginning with Greek and Roman rationalism and mythology, followed by the shifts in thought brought by Christianity, the Reformation, the Renaissance, the Enlightenment, Humanism, Romanticism, Industrialism and Relativity. Throughout these periods, the question remained: did reality have one true, objective form, and if so, could human minds access it? Then came the advances in machines in the 20th century, "as humans began to approach the limits of their cognitive capacity, they became willing to enlist machines (computers) to augment their thinking and transcend those limitations" (p. 49). By the 21st century, we had ushered in the new world of cyberspace, and corporations used its new tools, such as AI and platforms, to accumulate massive wealth and power.
The Technology
It wasn't until the first modern computers were invented in the 1940s that enough computing power was available to explore non-human intelligence. The rest of the 20th century saw the development of the intelligent machine, as evidenced by these events:

- Alan Turing's 1950 paper "Computing Machinery and Intelligence" and his test to measure human-like performance by a machine;
- The 1956 Dartmouth Workshop, which produced a definition of AI, "machines that can perform tasks that are characteristic of human intelligence," and proved to be the catalyst for the "First Wave of AI," identified as handcrafted knowledge, in which the machine was capable of logical reasoning;
- Researchers in the U.S. and Japan created the ELIZA and WABOT-1 machines;
- The first "AI winter" (1974-80), when criticism of AI slowed funding, and thus research;
- An "AI boom" (1980-87), followed by another AI winter (1987-93);
- Google's founding in 1998; within a few years, Schmidt was hired to make it profitable, which he did by exploiting the free behavioral data collected from users, creating a new type of marketing model.
It is interesting that in "The Age of AI," while the authors describe global network platforms (GNPs) as a new type of entity whose scale encompasses user populations larger than those of some countries, they question what type of world these platforms are creating. They also say "there is no concern how these virtual solutions might affect the values and behavior of entire societies," because no basic vocabulary or concepts have yet been developed for an informed debate about this technology. In the end, they tell us to act urgently and to understand the effect of GNPs at every level: individuals, companies, societies, nations, governments and regions. I question why Schmidt never thought of those issues 20 years ago, when he was making millions of dollars exploiting the biggest GNP of them all.
The Situation Today
Machines: The focus now is on the next release of AI, GPT-5 (GPT-4 was just released), and whether there should be another AI winter, lasting six months or even indefinitely. Some 30,000 scientists, researchers and experts signed a letter from the Future of Life Institute requesting a six-month pause on training AI models more powerful than GPT-4, as they could represent a risk to society and humanity. In response, Eliezer Yudkowsky, research lead at the Machine Intelligence Research Institute (MIRI), wrote that we should completely shut down any work on this technology, because we do not really understand what these systems are capable of, and there is no plan or preparation for the possibility that they develop a consciousness that would allow them to perform horrendous acts, like killing everyone on earth.
Will the platforms get bigger and more powerful, or smaller and more community-based? I bet on the latter. In fact, there are signs that we may already be in a post-platform world.
Humans: This book raised more questions than answers about how humans are going to live with these ultra-intelligent systems, such as:

- Will we use our brains less as we become more dependent on AI?
- What is our new role: are we partners, or do we become a new species, homo technicus?
- How will it change the way we work, play and live?
- Industries we know are changing include healthcare, medicine, science-based businesses, finance, law, education, the military and the environment; what other industries are showing signs of an impact?
- What will we do to compensate the humans displaced by AI?
- How do we ramp up AI skills training?
Governments and Regulations: Italy has a ban on ChatGPT as of this writing, and the European Union published its proposed Artificial Intelligence Act two years ago, after many years of work on AI issues. China does not have its own ChatGPT, though Baidu, Alibaba and JD.com are developing equivalents, and it does have specific cyberspace regulations. Sweden has ruled out a ban. The U.S. passed the National AI Initiative Act into law in 2021, coordinated by the White House Office of Science and Technology Policy. Are these efforts sufficient to prevent disaster?
The Costs: Building AI systems requires a tremendous amount of computing power, which makes the barrier to entry extremely high, in the billions of dollars.
The Conclusion
After reading this book, I am more inclined to think about ideas from two of my favorite writers, Jaron Lanier and Gillian Tett. Lanier looks back to the 20th century and the work of the scientists Vannevar Bush (1945) and Ted Nelson (1960), neither of whom is referenced in "The Age of AI." Both believed in the importance of the provenance of data, but somehow, perhaps in the rush to get products to market, this was ignored by subsequent technologists. Provenance is key to the concept Lanier calls data dignity: the importance of context in a digital network that could support a new creative class. It comes from cracking open the black box, the "giant ocean of jello, a vast mathematical mixing," from which today's AI genie popped out, and learning more about the models that produce the output AI gives us. This way of thinking highlights an often-ignored fact: it is humans who created the input used to train AI models, and all of their contributions were transformed into new text or images. Therefore, if you think of data as labor, there may be some way for people to "get paid for what they create, even when it is filtered and recombined through big models, and tech hubs would earn fees for facilitating things that people want to do."
This idea of data dignity is associated with a new definition of AI from British journalist Gillian Tett, who writes about AI as anthropological intelligence, which embeds our tech tools with culture and context. She has written about the example of real estate and AI, noting that "robots cannot sense the intangible quality of a building or read the body language of purchasers or sellers." So perhaps not all that is solid needs to melt into air. We still need humans to determine what reality means to us on earth, at least at this stage in the Age of AI.

Nancy Sanquist, IFMA Fellow, is a professional involved in the built environment for the last few decades. She is the Past Chair of the IFMA Foundation, with which she has worked for the last six years. She is a co-founder of the Global Workforce Initiative (GWI) and the Workplace Evolutionaries, and is the author of many articles and co-editor of books on FM/CRE, technology, architecture, urban planning and maintenance including the award-winning book series titled “Work on the Move (1&2).” She is working on a new book on “Reimagining Place in the 21st Century.”
References
Signatories of the Future of Life Institute letter include Elon Musk, Steve Wozniak, Andrew Yang and Yuval Noah Harari.
Yudkowsky, Eliezer. "Pausing AI Developments Isn't Enough. We Need to Shut It All Down." Time, March 29, 2023.
Lanier, Jaron. "There Is No AI." The New Yorker, April 20, 2023.