AI in FM
What you need to know

It is hard to avoid the deluge of product news and advertisements espousing artificial intelligence (AI) these days. Seemingly every product incorporates the word “smart” somewhere in its marketing, if not in the product name itself. However, it is important to separate the facts from the hype in this fast-moving technological landscape, starting with basic definitions of AI and the terminology being used.
Note that some of the information presented was generated by AI technology; it is identified in italics and reproduced exactly as the generative AI system produced it. The intent is to provide a foundation for what AI is, and is not, when exploring its application within the work environment.
A brief history of AI
The term “artificial intelligence” was first used by John McCarthy in 1955. Early work on natural language processing and the first uses of AI in robotics occurred in the 1960s. AI then entered what was termed the “AI Winter,” from the early 1970s to the late 1980s, with little investment or advancement in the field during this period.
A resurgence occurred in the 1990s when IBM’s Deep Blue beat world chess champion Garry Kasparov in 1997. The 2000s saw advancements with early self-driving vehicles, and the 2010s brought Apple’s Siri virtual assistant, IBM’s Watson winning the “Jeopardy!” game show, further developments in self-driving vehicles and, near the end of the decade, the release of OpenAI’s GPT-2 language model.
The 2020s have exploded with advancements, including OpenAI’s GPT-3 large language model, the DALL-E system that creates original images from a textual description, the Midjourney system that creates image artwork from a textual description and the ChatGPT generative AI program.
AI defined
The definition of Artificial Intelligence (AI) is a field of study that seeks to create machines that can perform tasks that typically require human intelligence, such as visual perception, speech recognition, decision-making, and language translation.
A generally accepted authoritative definition of AI is that it is the simulation of human intelligence in machines that are designed to think and act like humans. AI systems are trained to perform specific tasks by being exposed to large amounts of data and using algorithms to identify patterns and make decisions. The goal of AI research is to create systems that can perform tasks that would normally require human intelligence, such as recognizing objects in images or understanding natural language.
There are many different approaches to AI, including machine learning, deep learning, neural networks, and expert systems, and the definition of AI continues to evolve as the field develops. However, at its core, AI is about creating systems that can perform tasks that would normally require human intelligence, and the definition is centered around the idea of creating machines that can think and act like humans.
The preceding definition was produced by OpenAI’s ChatGPT generative AI bot and is presented word for word as the bot generated it. A review of other authoritative AI sources supports the definition as written.
A vital element of this definition of AI is the statement, “the simulation of human intelligence in machines that are designed to think and act like humans.” Keep in mind that there is a key distinction between thinking and acting like humans and generating an intelligent and grammatically correct response to a question.
AI or machine learning
Some products use the term AI in their literature, but a closer examination reveals capabilities more aligned with machine learning (ML). This is not technically incorrect, as ML is a subset of the broader AI umbrella. So, what is machine learning?
Artificial Intelligence (AI) and Machine Learning (ML) are related but distinct concepts.
Artificial Intelligence refers to the simulation of human intelligence in machines that are designed to think and act like humans. AI systems can perform tasks that typically require human intelligence, such as visual perception, speech recognition, decision-making, and language translation. AI systems can be developed using various techniques, including rule-based systems, expert systems, and machine learning.
Machine Learning, on the other hand, is a subfield of AI that deals with the development of algorithms that allow machines to learn from data and improve their performance on a specific task over time, without being explicitly programmed. Machine learning algorithms use statistical methods to enable systems to make predictions or decisions by finding patterns in data. The algorithms are trained using large amounts of data, and the more data they are exposed to, the more accurate they become.
In other words, Machine Learning is a means to achieve Artificial Intelligence, as it provides the algorithms that allow AI systems to learn from data and improve their performance. However, not all AI systems use machine learning, as other techniques, such as rule-based systems or expert systems can also be used to develop AI systems.
In short, AI is the broader concept of machines being able to carry out tasks that would normally require human intelligence, while Machine Learning is a specific approach to achieving AI that involves training algorithms to make predictions or decisions by learning from data.
In facility management operations and maintenance tools, there are many practical applications for machine learning.
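To make the distinction concrete, the short Python sketch below illustrates machine learning in the sense described above: an algorithm finds patterns in labeled data rather than following explicitly programmed rules. The sensor readings, asset counts and failure labels are entirely invented for illustration and do not represent any particular maintenance product.

```python
# A minimal, hypothetical sketch of machine learning applied to maintenance data:
# predict whether an air-handling unit is likely to fail soon based on sensor
# readings. All data here is synthetic and the feature choices are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
n = 500  # pretend we have history for 500 units

# Synthetic features: vibration (mm/s), bearing temperature (°C), runtime hours.
vibration = rng.normal(4.0, 1.5, n)
temperature = rng.normal(60.0, 8.0, n)
runtime_hours = rng.normal(20_000, 5_000, n)
X = np.column_stack([vibration, temperature, runtime_hours])

# Synthetic label: failure within 30 days, more likely with high vibration and temperature.
risk = 0.6 * (vibration - 4.0) + 0.3 * (temperature - 60.0) / 8.0
y = (risk + rng.normal(0, 0.5, n) > 1.0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# "Learning" means the algorithm fits patterns in the labeled examples;
# no failure rules are explicitly programmed.
model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(X_train, y_train)
print(f"Held-out accuracy: {model.score(X_test, y_test):.2f}")

# Score a new unit reporting elevated vibration and temperature.
new_unit = [[7.5, 78.0, 31_000]]
print(f"Estimated failure probability: {model.predict_proba(new_unit)[0, 1]:.2f}")
```

The specific algorithm is beside the point; what matters is that the quality of the predictions depends entirely on the quality and quantity of the historical data the model learns from.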
How do these AI mechanisms work?
Different types of AI mechanisms perform different functions, but the one getting the most attention in the media is called generative AI and is based on the large language model (LLM). An LLM is an AI model trained on an enormous body of text, which it uses to predict and generate language. ChatGPT was given the prompt, “How does ChatGPT work?” The following is the response it returned:
ChatGPT is a conversational AI model developed by OpenAI. It uses a deep learning technique called transformers, which was introduced in 2017 by Vaswani et al. The model is trained on a massive dataset of text and can generate human-like responses to questions, hold conversations, and complete sentences or paragraphs based on the context.
The basic building block of the transformer architecture is the self-attention mechanism, which allows the model to weigh the importance of different words in the input when making predictions. The model is trained using a variant of the Transformer architecture called the GPT-3 architecture. It has over 175 billion parameters, making it one of the largest language models ever created.
When a user inputs a prompt, ChatGPT uses its self-attention mechanism to analyze the context of the prompt and generate a response. This response is generated by sampling from the probability distribution over the vocabulary of the model, given the context of the prompt. The model is fine-tuned to produce more accurate and relevant responses by adjusting the weights in the neural network based on the input-output pairs it is trained on.
In summary, ChatGPT works by using the transformer architecture and self-attention mechanism to analyze the context of the input and generate a response by sampling from a probability distribution over the vocabulary of the model.
ChatGPT effectively uses statistical analysis to select words to respond to the user’s question, which is not the same as understanding the contextual meaning of the generative response. While an impressive feat, this is an important distinction to remember when viewing responses from generative AI systems. Minor changes to the phrasing of the user’s prompt may return a very different response.
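To illustrate what “sampling from a probability distribution over the vocabulary” looks like, the toy Python sketch below picks a next word in proportion to made-up probabilities. It is a deliberate oversimplification, not OpenAI’s implementation, but it shows why the same prompt can yield different wording on each run and why fluent output is not the same as understanding.

```python
# A toy illustration of next-word sampling. The candidate words and their
# probabilities are invented; a real LLM scores tens of thousands of tokens
# using billions of learned parameters, then samples in a similar spirit.
import random

context = "The boiler is overdue for preventive"
candidates = {
    "maintenance": 0.72,   # statistically the most likely continuation
    "inspection": 0.15,
    "service": 0.08,
    "paperwork": 0.05,     # unlikely words still have a nonzero chance
}

# Draw one word in proportion to its probability weight. Because this is a
# random draw, repeated runs (or slightly reworded prompts) can differ.
next_word = random.choices(list(candidates), weights=list(candidates.values()), k=1)[0]
print(context, next_word)
```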
OpenAI’s website lists the following ChatGPT limitations:
- ChatGPT sometimes writes plausible-sounding but incorrect or nonsensical answers.
- ChatGPT is sensitive to tweaks to the input phrasing or attempting the same prompt multiple times. For example, given one phrasing of a question, the model can claim to not know the answer, but given a slight rephrase, can answer correctly.
- The model is often excessively verbose and overuses certain phrases, such as restating that it’s a language model trained by OpenAI. These issues arise from biases in the training data.
- Ideally, the model would ask clarifying questions when the user provided an ambiguous query. Instead, our current models usually guess what the user intended.
- While there are efforts to make the model refuse inappropriate requests, it will sometimes respond to harmful instructions or exhibit biased behavior.
It should also be noted that the LLM training data has a cutoff point. In the current iteration of ChatGPT, that would be early 2022. Therefore, information or “factual details” occurring after the cutoff point will not be incorporated into its responses.
Does this mean that data provided by ChatGPT cannot be trusted? Not necessarily, but the recommendation to verify using a known authoritative source is a common statement appearing at the end of many ChatGPT responses.
In one sample prompt given to ChatGPT regarding the share of global carbon emissions attributable to commercial and industrial sources, the response indicated a figure of 40 percent. That figure was accurate at the cutoff point for the LLM’s training data, but more recent data indicates a value closer to 37 percent.
On a related note, one study indicated that up to 43 percent of professionals have used generative AI content in their work product. While it is unknown how much of that content contains factual errors or outdated information, it is prudent to perform additional due diligence before using it to supplement your professional knowledge and experience.
Text-to-image AI
Another branch of AI includes tools that generate photo-realistic or artistic images from a text description (prompt). Two such systems are Midjourney, which generates remarkable artistic images, and DALL-E, which can render photo-realistic images. Experimenting with both platforms produced an array of interesting outputs and demonstrated how minor variations in the text prompt can yield wildly different results. The examples below focus on an identical text-to-image prompt fed to both systems and the types of images each produced.
Midjourney example
Midjourney was given a text prompt to generate an image of “one person photo indoors studying blueprint drawing.” Note that there are guidelines for formulating the text prompt, some of which are reflected in the quoted prompt above.
While this is an impressive artistic rendering, note that the system produced the drawing using a Caucasian male as the subject, whereas the text prompt only specified “one person.” This is an example of response bias possible in generative or text-to-image AI. Still, this type of rendering could be very useful if there is a need for an image with specific content elements, and a professional artist is not readily available.
DALL-E example
OpenAI also produces the DALL-E system. It is like Midjourney, but can produce photo-realistic images. Tools of this type can produce deepfake images capable of fooling all but digital imaging experts. The media have run many stories warning of the potential abuse of such powerful tools to mislead the public.
How good are these images? DALL-E was given the same text prompt to generate an image of “one person photo indoors studying blueprint drawing.”
This result does not use the DALL-E system's full rendering capabilities, but it still takes a very close examination to determine that this is not an actual photo. This image also exhibits bias: while it produced both male and female subjects, it rendered them with ethnicities that were not specified in the text prompt.
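For readers curious how such a prompt is submitted programmatically, the sketch below shows one plausible way to request an image through OpenAI's Python library, assuming an API key is configured. The model name and parameters are assumptions that may change over time; Midjourney, by contrast, is typically driven through its Discord interface rather than a public API.

```python
# A minimal sketch of requesting an image with the same prompt used above,
# via the openai Python package (v1.x). Assumes OPENAI_API_KEY is set in the
# environment; the model name is an assumption and may differ by account or date.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.images.generate(
    model="dall-e-3",
    prompt="one person photo indoors studying blueprint drawing",
    n=1,
    size="1024x1024",
)

# The API returns a URL to the rendered image rather than raw image bytes.
print(response.data[0].url)
```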
What are the prospects for AI in FM?
Commercial real estate is rife with opportunities for artificial intelligence and machine learning, many of which directly benefit FM.
Fault Detection & Diagnostics (FDD)
FDD tools utilize input from various sensors and information sources to detect potential problems with building systems and attempt to isolate the root cause of a given problem for routing to the appropriate maintenance team. The diagnostics often involve machine learning mechanisms that can identify one or more probable causes for a given issue, allowing the FDD system to generate a work order for the suspected faulty system or asset, along with suggested corrective actions. Advanced FDD capabilities include initiating automated responses (corrective actions) when a control sequence change may return the affected system to its normal operating state.
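As a concrete, simplified illustration, the sketch below encodes a single diagnostic rule of the kind an FDD engine might apply: if an air handler's cooling valve is fully open yet supply air stays well above setpoint, flag a probable fault and suggest a work order. The asset tag, thresholds and probable causes are invented; commercial FDD platforms combine large rule libraries with ML-based diagnostics and integrate directly with the maintenance management system.

```python
# A hypothetical fault detection & diagnostics (FDD) rule. Asset names,
# thresholds and probable causes are illustrative only.
from dataclasses import dataclass

@dataclass
class AhuReading:
    supply_air_temp_f: float      # measured supply air temperature, °F
    supply_air_setpoint_f: float  # supply air temperature setpoint, °F
    cooling_valve_pct: float      # cooling valve position, 0-100 percent open

def detect_cooling_fault(history: list[AhuReading], threshold_f: float = 4.0) -> dict | None:
    """Flag a probable cooling fault when every recent reading shows the valve
    essentially wide open while supply air stays well above setpoint."""
    sustained = bool(history) and all(
        r.cooling_valve_pct >= 95.0
        and (r.supply_air_temp_f - r.supply_air_setpoint_f) > threshold_f
        for r in history
    )
    if not sustained:
        return None
    return {
        "asset": "AHU-1",  # hypothetical asset tag
        "fault": "Supply air temperature above setpoint with cooling valve fully open",
        "probable_causes": [
            "low chilled water flow",
            "fouled cooling coil",
            "failed valve actuator",
        ],
        "suggested_action": "Dispatch mechanical technician to verify chilled water supply",
    }

# One hour of 15-minute readings showing a sustained deviation.
readings = [AhuReading(62.0 + i, 55.0, 100.0) for i in range(4)]
work_order = detect_cooling_fault(readings)
if work_order:
    print(f"Create work order for {work_order['asset']}: {work_order['fault']}")
```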
Occupant Engagement
Innovative building control systems can now identify individual occupants and initiate actions on their behalf. One state-of-the-art building is capable of:
- Identifying a building occupant as their car approaches the parking garage by reading the license plate, then opening the parking gate to allow the car to enter.
- Sending the elevator to the occupant’s parking level, so it is waiting for them when they exit their vehicle.
- Turning on the lights and setting the preferred temperature in the occupant’s office.
- Finally, requesting the office’s automated coffee station to brew the occupant’s preferred latte, so it is waiting for them as they exit the elevator.
These coordinated actions would not be possible without the appropriate sensors and machine learning technologies to interact with discrete building systems.
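A hypothetical sketch of how such a sequence might be orchestrated in software appears below. The occupant profile, system names and integration functions are all invented placeholders; in practice each step would call the parking, elevator, lighting, HVAC and amenity systems through their own interfaces or a building platform.

```python
# A hypothetical orchestration of the arrival sequence described above.
# All system interfaces below are invented stubs standing in for real
# parking, elevator, lighting, HVAC and amenity integrations.
from dataclasses import dataclass

def open_parking_gate(): print("Gate: open")
def call_elevator(to_level): print(f"Elevator: dispatched to level P{to_level}")
def set_zone_lighting(zone, on): print(f"Lighting: {zone} {'on' if on else 'off'}")
def set_zone_setpoint(zone, temp_f): print(f"HVAC: {zone} setpoint {temp_f}°F")
def queue_coffee_order(drink): print(f"Coffee station: brewing {drink}")

@dataclass
class OccupantProfile:
    name: str
    parking_level: int
    office_zone: str
    preferred_temp_f: float
    preferred_drink: str

# Hypothetical occupant directory keyed by license plate.
DIRECTORY = {
    "ABC1234": OccupantProfile("J. Smith", 2, "Zone-3F-12", 72.0, "oat milk latte"),
}

def on_plate_detected(plate: str) -> None:
    """Event handler triggered when the garage camera reads a license plate."""
    profile = DIRECTORY.get(plate)
    if profile is None:
        return  # unknown vehicle: take no action
    open_parking_gate()
    call_elevator(to_level=profile.parking_level)
    set_zone_lighting(profile.office_zone, on=True)
    set_zone_setpoint(profile.office_zone, profile.preferred_temp_f)
    queue_coffee_order(profile.preferred_drink)

on_plate_detected("ABC1234")
```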
Maintenance management
With the volume of maintenance activities occurring within a building or across a portfolio, it is easy for the FM to miss signs of impending problems within a given building system. AI and ML can be trained to scan properly coded maintenance records for trending problems and provide appropriate alerts. As FM and maintenance resources shrink, telemetry and technologies like these will be necessary to supplement the missing eyes and ears in the field.
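The sketch below gives the flavor of that kind of scan using a simple frequency count over coded work orders; a production tool would layer statistical or ML-based trend detection on top of this. The work-order fields, fault codes and thresholds are invented, which also underscores why consistent coding of maintenance records matters.

```python
# A hypothetical scan of coded maintenance records for trending problems.
# Work-order fields, fault codes and thresholds are invented; the same idea
# could be applied to an export from any CMMS with consistently coded records.
from collections import Counter
from datetime import date, timedelta

work_orders = [
    {"asset": "AHU-1", "fault_code": "BELT_SLIP", "closed": date(2024, 1, 5)},
    {"asset": "AHU-1", "fault_code": "BELT_SLIP", "closed": date(2024, 2, 2)},
    {"asset": "AHU-1", "fault_code": "BELT_SLIP", "closed": date(2024, 2, 20)},
    {"asset": "CH-2",  "fault_code": "LOW_REFRIG", "closed": date(2023, 6, 1)},
]

def trending_faults(records, window_days=90, min_occurrences=3, today=date(2024, 3, 1)):
    """Alert on asset/fault-code combinations that repeat within a recent window."""
    cutoff = today - timedelta(days=window_days)
    recent = [(r["asset"], r["fault_code"]) for r in records if r["closed"] >= cutoff]
    counts = Counter(recent)
    return {pair: n for pair, n in counts.items() if n >= min_occurrences}

for (asset, fault), n in trending_faults(work_orders).items():
    print(f"ALERT: {asset} has {n} '{fault}' work orders in the last 90 days")
```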
What does this mean for FM?
While the FM is not expected to be an AI subject matter expert, learning enough about the subject to separate hype from the practical applications is highly recommended. Most AI & ML systems require massive amounts of data to learn from and to function. That data must be complete, accurate and timely. Feeding bad data to any AI mechanism typically will deliver equally bad responses – or worse.
Remember, product demonstrations (almost) always work and are designed to impress their audience. Be prepared to ask probing questions about how products claiming AI features actually work and what prerequisites they need to function effectively in operation. Then ask whether the group or organization is prepared to deliver those prerequisites. If not, it may be a good idea to step back and work on those processes and data quality before diving into AI technology.

References
Infographics & Visualizations
Infographic: Generative AI, Explained by AI
Decoding Google's AI Ambitions (and Anxiety) (visualcapitalist.com)
Media Reviews & Opinion Articles
ChatGPT: Everything you need to know (wired.com)
Down the Chatbot Rabbit Hole (wired.com)
The Race to Build a ChatGPT-Powered Search Engine (wired.com)
The Chatbot Search Wars Have Begun (wired.com)
Remember Bing? With ChatGPT's Help, Microsoft Is Coming for Google Search (cnet.com)
How AI Will Transform Project Management (hbr.org)
Study finds more workers using ChatGPT without telling their bosses (techspot.com)
Legal Considerations
AI-generated comic artwork loses US Copyright protection (arstechnica.com)
Generative AI is a legal minefield (axios.com)
ChatGPT Is Making Universities Rethink Plagiarism (wired.com)
General AI Information & News
AI Magazine - AI Industry News
Analytics Insight Magazine (analyticsinsight.net)
Artificial Intelligence | MIT News (news.mit.edu)
Artificial Intelligence Latest News and Features (wired.co.uk)
DATAVERSITY - Data Education for Business and IT Professionals
AI News - Artificial Intelligence News (artificialintelligence-news.com)