Applying AI in FM Requires Caution
A 2-sided story
"AI is the new electricity" - Andrew Ng
Few technologies speak to the imagination the way artificial intelligence (AI) does. It holds the promise of fundamentally augmenting human capability and of truly smart automation. As AI technology and its implementations evolve and enter society and the economy, it is inevitable that FM professionals' interest in applying it will grow. Researching the essence of AI, however, also uncovers some of its tricky sides.
Fundamentals of AI
Before going into considerations around its application, it is important to understand what AI is and what its main principles are.
The term artificial intelligence was coined at the Dartmouth Summer Research Project on Artificial Intelligence in 1956. It started as an attempt to make machines capable of exhibiting human-like intelligence. An AI application operates by means of a computational model (CM). These models are specifically designed to perform the real-world tasks they are intended to execute; designing them is quite a specialized activity. Machine learning (ML) is the science of training these CMs, and it is the mainstream technology driving most applied AI.
Figure 1: Example of a deep-learning AI model
Deep learning (DL) is a subfield of ML inspired by brain function. A DL-based CM comprises one or more layers of artificial neural networks (ANNs). In essence, ANNs are big arrays of numbers used for calculation, where the calculation procedure is inspired by the working of biological neurons. ANN models with multiple layers, as in Figure 1, are known as deep neural networks (DNNs). DNNs are perceived to be better at embedding more complex relationships.
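The idea that an ANN is "big arrays of numbers used for calculation" can be made concrete with a minimal sketch. The network shapes, random weights and activation below are illustrative assumptions, not any particular product's model:

```python
import random

def relu(x):
    # Simple neuron activation: fires only for positive input,
    # loosely inspired by biological neurons.
    return x if x > 0 else 0.0

def layer(inputs, weights, biases):
    # One layer of artificial neurons: each neuron computes a weighted
    # sum of its inputs plus a bias, then applies the activation.
    return [relu(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

def forward(inputs, layers):
    # A network is just layers applied one after another.
    for weights, biases in layers:
        inputs = layer(inputs, weights, biases)
    return inputs

random.seed(0)
def rand_layer(n_in, n_out):
    # Arrays of numbers: n_out neurons, each with n_in weights.
    return ([[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_out)],
            [0.0] * n_out)

# Multiple layers make this a (very small) deep neural network.
net = [rand_layer(4, 8), rand_layer(8, 8), rand_layer(8, 2)]
print(forward([1.0, 0.5, -0.5, 1.0], net))  # two output values
```

Real DNNs differ only in scale: the same pattern with millions of weights.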
How AI learns
"The road to smartness is paved with data."
ML is a data-driven process. In speech-to-text AI applications, such models are trained on mammoth amounts of data containing speech and transcribed text.


Figure 2: Generalized depiction of deep learning
Initially, the speech data is presented to an infant (untrained) model, making it generate text as output. This generated text is compared with the original transcription to identify errors, which are used to update the model. After several iterations of generating responses and updating the model, the computational model effectively performs the transcription. Most deep-learning models are trained using similar data-driven, pattern-embedding approaches.
This process reveals two key elements that are at the basis of caution.
First, the datasets as offered drive the learning process. Hence, the data must correctly represent the subject to be learned and must not contain incorrect or biased data.
Secondly, the resulting decision algorithm grows during the iterations but cannot be described or explicitly explained: its reasoning cannot be interpreted other than by looking at its results.
Cautions around AI today
As the technology is developing at breakneck speed, applications are released and the lessons from experimenting with them are learned. While much of this is positive, some of it calls for caution.
On the positive side, AI has fundamentally augmented medicine: it allows doctors to diagnose illnesses better and faster by sifting through massive amounts of data (e.g., images) and extracting essential conclusions for doctors to use in their diagnoses.
However, there are also fundamental aspects of developing AI applications that give cause for caution:
Biased datasets
The heavy dependency of AI models on their training datasets makes them susceptible to inheriting prejudices present in that data. An example is found in access management, a task that requires assessment of potential threats and opportunities. AI used to grant building access based on facial recognition could inherit a bias toward races overrepresented in the (criminal) records offered to the system during its ML process. As a result, it may discriminate against individuals based on external body features alone.
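How a skewed dataset becomes a skewed model can be shown with a deliberately naive sketch. The groups, counts and frequency rule below are hypothetical, invented purely to illustrate the inheritance mechanism; no real access-control system works this simply:

```python
from collections import Counter

# Training records: (group, outcome). Group "A" is overrepresented in
# the negative records because of how the historical data was collected,
# not because of any higher actual risk.
records = ([("A", "deny")] * 80 + [("A", "allow")] * 20
         + [("B", "deny")] * 20 + [("B", "allow")] * 80)

def learned_deny_rate(group):
    # A naive "model" that simply reproduces historical frequencies --
    # the pattern-embedding behavior ML training exhibits.
    outcomes = Counter(o for g, o in records if g == group)
    return outcomes["deny"] / (outcomes["deny"] + outcomes["allow"])

print(learned_deny_rate("A"))  # 0.8 -- the bias in the data is inherited
print(learned_deny_rate("B"))  # 0.2
```

A trained DNN embeds the same skew, only less visibly.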
Noninterpretability
Widely used DNN models are essentially black-box systems: a DNN provides no explanation of how it arrives at its output. For example, if a speech-to-text model transcribes "dial 911" as "file 711," the DNN provides no reasons for such an error. This aspect of AI makes it unsuitable for several high-stakes business applications. Research in the field of explainable AI is addressing this noninterpretability. One example of these attempts is the deep neural decision tree (DNDT), but this technology is in its infant stage.
AI application in FM
What could be the consequences for the application of AI in FM? This is open to a much-needed conversation and debate within the industry.
The business of FM deals with things as well as people. This implies that both can be subject to AI applications as pictured below:
Figure 3: Principles of AI applications toward people and things.
AI on Things
Buildings, and the systems that operate in and around workplaces, essentially behave in deterministic ways: their response to environmental conditions, and to changes in them, is repeatedly identical. When all conditions are equal, a thing will respond or run in the same way. This deterministic behavior eases the development of AI applications, both in defining the model and in providing the ML datasets to train it. It is no coincidence that AI applications such as image recognition and, to some extent, language translation are at the forefront of successful AI-driven solutions in the marketplace.
Although not many AI applications for asset management (failure prediction) are on the market yet, they are expected to emerge. However, their complexity is not to be underestimated. For instance, every installed HVAC system is different because of the architecture of the building, and buildings located in different regions may experience different environmental parameters, such as temperature and humidity profiles. This will result in different responses from systems of the same manufacturer and type. This dynamic clearly attests to the value of deep learning.
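Why identical HVAC units still need per-installation models can be sketched with a toy baseline detector. The building names, temperature readings and three-sigma rule below are invented assumptions for illustration:

```python
import statistics

# Supply-air temperatures (Celsius) learned from each building's own history.
history = {
    "office_oslo":  [18.2, 18.5, 18.1, 18.4, 18.3],
    "office_dubai": [23.8, 24.1, 24.0, 23.9, 24.2],
}

def is_anomalous(building, reading, n_sigmas=3.0):
    # Compare the reading with that building's OWN learned baseline,
    # not with a one-size-fits-all threshold.
    baseline = history[building]
    mean = statistics.mean(baseline)
    std = statistics.stdev(baseline)
    return abs(reading - mean) > n_sigmas * std

print(is_anomalous("office_dubai", 24.0))  # False: normal in Dubai
print(is_anomalous("office_oslo", 24.0))   # True: same reading, alarming in Oslo
```

The same sensor value is healthy in one climate and a failure signal in another, which is exactly the per-system variability a learned model must capture.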
AI on People
Using AI applications related to the behavior of people is much different. Apart from the cautions already discussed, there is another fundamental aspect that must be considered.
Where objects show deterministic behavior, people do not. People continuously learn autonomously from personal experiences (inputs from their environment) and, as a result, may change their preferences over time.
This is a fundamental difference from the behavior of objects, and it may hamper the learning process of AI applications. ML datasets of people's preferences and behaviors show historic behavior, based on past circumstances.
When AI applications learn from these past behaviors and preferences and adjust their recommendations or other initiatives accordingly, a bubble phenomenon arises. The system will provide information or advice based on past behavior and hence push alternatives to the background. Typical examples are the offerings of companies like Netflix and Amazon that advise what movie to view or book to read. When a couple opens their Netflix main page, the content of the two users' profiles will typically differ based on past individual selections. One could state that this is advantageous, because the system supports the user in finding interesting content. On the contrary, it could be said that the same system deprives the individual of finding great content that is just a little different from what he or she has chosen thus far. It is highly questionable whether this is a desirable situation in business contexts.
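The feedback loop behind the bubble can be shown in miniature. The genres, starting history and "recommend the most-watched genre" rule are invented for illustration; real recommenders are far more sophisticated but share the same reinforcement dynamic:

```python
from collections import Counter

history = ["thriller", "comedy", "thriller"]  # slight initial lean

def recommend(history):
    # Naive recommender: suggest the genre the user picked most often.
    return Counter(history).most_common(1)[0][0]

# Simulate a user who simply accepts every recommendation.
for _ in range(10):
    history.append(recommend(history))

print(Counter(history))  # the slight lean has hardened into a bubble
```

After ten rounds, every recommendation is a thriller: the system never surfaces the alternative again, even though the user only slightly preferred it at the start.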
There are also privacy concerns: to what extent would individuals feel comfortable with the notion of being under incessant supervision, and how do these systems comply with privacy regulations like the GDPR in Europe?
Taking AI to the Workplace
The workplace community takes a high interest in AI technology. However, extreme caution is warranted here.
An example would be an AI application that advises individuals on when to work at which location (desk). In principle, that is a powerful application: individuals could be advised on the use of their preferred workspaces at times when their closest colleagues are near, and the system could warn them when occupancy of the workplace facilities is at its peak.
Many innovations stem from lateral thinking, a manner of solving problems using an indirect and creative approach via reasoning that is not immediately obvious. Lateral thinking is often induced by meeting different people from different backgrounds and discussing a topic among them. By combining different views from different backgrounds, new problem-solving approaches may emerge.
Now, the risk of such an AI application is creating a social workplace bubble: putting together a group of people who show similar interests and depriving them of bumping into someone completely different. AI deciding on, or recommending, suitable workplaces for people based on previous preferences can potentially create poorly performing workplaces.
How damaging would this potentially be to the innovative power of an organization? If the total cost of housing and FM services amounts to no more than 10 percent of the total cost of the organization, and wages are typically tenfold that number, suboptimization could prove quite damaging. There is no data to prove this case right or wrong, but the risk is worth noting.
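A back-of-the-envelope calculation makes the asymmetry concrete. The 5 percent saving and 1 percent productivity loss below are purely hypothetical assumptions, not figures from any study:

```python
# Relative cost units, per the article's rough proportions:
fm_cost = 10.0     # housing & FM: ~10 percent of total organizational cost
wage_cost = 100.0  # wages: roughly tenfold the FM number

fm_saving = fm_cost * 5 / 100            # suppose AI squeezes 5% out of FM cost
productivity_loss = wage_cost * 1 / 100  # ...but costs 1% of workforce output

print(fm_saving)          # 0.5
print(productivity_loss)  # 1.0 -- twice the saving is lost elsewhere
```

Because the wage base is so much larger, even a small hit to innovation or productivity can wipe out an impressive-looking FM optimization.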
A final consideration is the ethical question of self-determination. The more advice AI applications provide, the more individuals are implicitly directed in their choices, which might lead to diminished autonomy and deprive them of creating individual experiences. As the FM, HRM and IT functions deal with the working lives of people, there is probably more to consider than efficiency. Quality of life is closely related to quality of work; it has not yet been determined how AI can best contribute to that.
Views on AI in IT
Gartner states: "AI and the use of ML models to make autonomous decisions raises a new level of concern, with digital ethics driving the need for explainable AI and the assurance that the AI system is operating in an ethical and fair manner. Transparency and traceability are critical elements to support these digital ethics and privacy needs."
In its 2021 report on AI, Gartner stated these predictions and recommendations:
- By 2025, pretrained AI models will be largely concentrated among 1 percent of vendors, making responsible use of AI a societal concern
- In 2023, 20 percent of successful account takeover attacks will use deepfakes as part of social engineering attacks
- By 2024, 60 percent of AI providers will include harm/misuse mitigation as a part of their software
- By 2025, 10 percent of governments will avoid privacy and security concerns by using synthetic populations to train AI
- By 2025, 75 percent of workplace conversations will be recorded and analyzed for use in adding organizational value and assessing risk
Most of these probable concerns can be addressed by taking suitable technological and managerial measures; however, it is imperative for users of AI technologies to keep an open eye toward the probable pitfalls of emerging AI technology.
As FMs deal with both processes and the personal interests and privacy of people, the use of AI applications must be mindful of all aspects, both the good and the bad.
Erik Jaspers, IFMA Fellow has more than 40 years of IT experience. For the last 24 years he has led Planon, the leading smart building management software vendor. Having held senior management positions in developing Planon's software solutions, he is currently working on product strategy and innovation policies. He has contributed to multiple publications on IT and FM subjects, including IFMA publications "Work on the Move" (2011, 2016, 2021), "Technology for Facility Managers" (2012, 2017), GEFMA publications “CAFM-Handbuch" (2018) and "BIM in Immobielenbetrieb" (2022). He has authored articles on technology for FM and is a regular speaker at real estate and FM conferences around the world. Jaspers is a member of the IFMA EMEA Board, member of the IFMA IT Community leadership team and special advisor to the IFMA Foundation Board of Trustees.
Gagandeep Singh Saini graduated with a master's degree in artificial intelligence in December 2020 from Radboud University, Nijmegen, The Netherlands. His research, a project he performed at Planon, was on the use of smart cameras for the detection of parking space occupation, focusing on deep neural networks. He now works at Planon, focused on the application options of AI in FM and product offerings.
References
Singh Saini, G. (2020). Occlusion handling in parking space detection system. Radboud University, The Netherlands.
Aizenberg, I., & Aizenberg, N. N. (2000). Multi-Valued and Universal Binary Neurons: Theory, Learning and Applications.
Moor, J. (2006). The Dartmouth College artificial intelligence conference: The next fifty years. AI Magazine.
Gartner (2019). Top 10 Strategic Technology Trends for 2020.
O'Neil, C. (2016). Weapons of Math Destruction. ISBN: 978-0-141-98541-1.
Gartner (2021). The future of AI is not as rosy as some might think.
Yang, Y., Garcia Morillo, I., & Hospedales, T. M. (2018). Deep Neural Decision Trees.
IFMA Foundation (2016). Work on the Move 2.