Artificial intelligence creates a need to address governance, risk and interpretability challenges

Applications of AI


A survey released last August by AI Impacts found that the average respondent believes there is a 5% chance that advanced artificial intelligence (AI) will cause extremely negative outcomes, such as human extinction. Before you stock up on canned goods and pasta, consider the remaining 95% of outcomes, which are, one hopes, rather less extreme. These outcomes still require risk-based management, delivered through a range of interventions and by sectors that are educated, aware and engaged in AI-related opportunities and risks.

Andrew Knight is the Global Data and Technology Lead at RICS

The rise of various types of AI over the past decade or so has accelerated in recent months with the emergence of tools like ChatGPT. So, in addition to established statistical approaches such as regression analysis, supervised and unsupervised machine learning, and neural networks, generative AI and large language models now need to be added to the lexicon of terms to digest and understand.

AI, in many forms, is becoming ever more pervasive in all areas, including the built and natural environment. Some applications are generic, often mundane and relatively low-risk, including use cases such as note-taking, transcription and workflow automation. But when AI is used to power customer-service chatbots or to assist in recruitment, we expose ourselves to higher risks and potentially harmful consequences.

AI is already used in our industry for applications directly related to the development, construction and maintenance of assets throughout their lifecycle, such as cost estimation, benchmarking, scheduling and asset management, drawing on big data that can only be processed with the power of AI.

Looking across the different models and approaches that AI can take, we face a fundamental problem that spans a wide range of applications: a shift from so-called white-box tools, whose conclusions and results can be well understood and documented, to increasingly opaque black-box tools, where the internal decisions, and the data used to train and drive those decisions, are difficult if not impossible to explain. So we face huge opportunities in the form of AI, varying degrees of risk depending on its use, and many tools and models whose decision-making processes are difficult, if not impossible, to interpret or explain.
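As a minimal illustration of the white-box end of that spectrum, the sketch below fits a one-variable least-squares regression to some made-up cost-estimation figures (the floor areas and costs are purely illustrative, not real industry data). The point is that the model's entire "reasoning" reduces to two coefficients that a professional can inspect, document and defend, in contrast to a black-box model whose internal weights resist any such explanation.

```python
# A "white box" model whose reasoning is fully inspectable.
# Illustrative, invented data: floor areas (m^2) and build costs (in thousands).
areas = [100, 150, 200, 250, 300]
costs = [220, 310, 405, 498, 590]

# Ordinary least-squares fit for a single predictor, computed by hand.
n = len(areas)
mean_x = sum(areas) / n
mean_y = sum(costs) / n
slope = (
    sum((x - mean_x) * (y - mean_y) for x, y in zip(areas, costs))
    / sum((x - mean_x) ** 2 for x in areas)
)
intercept = mean_y - slope * mean_x

# The whole decision process is two numbers we can document and explain:
print(f"cost = {intercept:.1f} + {slope:.3f} * area")
```

A deep neural network trained on the same task might predict more accurately on richer data, but it would offer no equivalent pair of numbers to show a client or a regulator.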

As professionals, we must continue to make professional judgments about whether a particular AI approach is fit for a particular purpose. We must understand the risks inherent in each application of AI, ensure that all affected stakeholders are aware that AI is being used, and gain as much understanding as possible of the nature of the models being used and the provenance of the data behind them.

Professional skepticism continues to be a key skill in ensuring that we employ the right AI for the job and continue to provide sound advice to our clients and other stakeholders. The data used by an AI may be incomplete, wrong or outdated, or even malicious, supplied deliberately to degrade model performance. We also need to understand the new regulatory issues around patents, intellectual property, copyright and privacy that ChatGPT and other generative tools are now raising.
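That kind of skepticism can partly be built into the pipeline itself. The sketch below shows basic sanity checks applied to input records before they reach a model; the field names, plausible-range bounds and ten-year staleness threshold are illustrative assumptions, not any published standard.

```python
# Basic sanity checks on input records before they reach a model.
# Field names and thresholds are illustrative assumptions only.
def validate_record(record, current_year=2024):
    issues = []
    if record.get("cost_per_m2") is None:
        issues.append("missing cost_per_m2")                     # incomplete data
    elif not (100 <= record["cost_per_m2"] <= 20000):
        issues.append("cost_per_m2 out of plausible range")      # wrong or malicious data
    if record.get("survey_year") is not None and \
            current_year - record["survey_year"] > 10:
        issues.append("data more than 10 years old")             # outdated data
    return issues

# A record with an implausible cost and stale survey data is flagged twice.
flagged = validate_record({"cost_per_m2": 250000, "survey_year": 2008})
```

Checks like these do not make a model trustworthy on their own, but they give the professional a documented basis for rejecting or querying suspect inputs.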

As built environment domain experts, we must play a greater role in AI development, governance, operations and coordination. We do not have to become data scientists and start coding ourselves, but we should learn how to work with them, understand the basics of the approaches and tools they use, and become familiar with the statistical language and terminology on which many AI approaches rely. It is also important to allow properly developed and managed AI to make decisions and produce outputs without human intervention, because such intervention may reintroduce human biases that the application of AI had removed.

Regulators and professional bodies must play a role in formulating forward-looking education, guidance, standards and regulatory documents that balance the risks and opportunities of AI and emphasise the positive role that built environment professionals can play in its responsible use.





