
Do conversational AIs enable increased privacy knowledge and improve Privacy by Design at organizations? – Markus Klokhøj & Luigi Bruno

Markus Klokhøj and Luigi Bruno

Markus Klokhøj is a Privacy and Technology Leader at Group Digital at IKEA. His interest in technology and law began at the University of Copenhagen (UCPH), where he coded native iOS apps to study for his LL.M. exams and built a tutoring platform for law students. He has won several hackathons and started the first student-driven coding course for law students. Since his studies, he has worked with law and technology at the Danish Data Protection Agency, Deloitte, and now IKEA.

https://www.linkedin.com/in/markus-klokhøj-496345119/

 

Luigi Bruno is a Privacy Engineering Leader at Group Digital at IKEA. He holds a combined bachelor's and master's of laws from the University of Bari and an LL.M. from McGill University, and will soon graduate with an MSc in Computer Science from the University of York. Luigi designs and implements technology solutions for privacy and cybersecurity. In 2022, he taught Cybersecurity for Jurists at McGill's Faculty of Law, where he is also a doctoral student focusing on the intersection of law and machine learning.

https://www.linkedin.com/in/luigibruno1/


1. Intro

This month we have Bob, a new colleague starting with us. Bob supports us by answering privacy-related queries. Bob is always available, friendly, knowledgeable, and never gets stressed. So, is Bob simply a colleague from heaven? Probably not. While Bob is awesome, Bob lacks some skills – some key ones. Why? Because Bob is not a human. Bob is a conversational AI – a chatbot.


During the last decade, conversational AIs – or chatbots – have become common among organizations attempting to automate and streamline tasks ranging from customer service to shipment tracking and data collection. Chatbots can be effective at solving simple tasks, such as Q&As and hospitality reservations. However, in many cases their potential has been overhyped, often leaving organizations disappointed and having to absorb the additional overhead required to discontinue the technology.

In many instances, then, chatbots are unsuited to carrying out complex tasks at the same level as humans. Even considering the rapid development of the technology, organizations should start from the limitations of chatbots when shaping implementation use cases, rather than focusing on the wide span of opportunities. Implementing a chatbot to do the wrong job, even with state-of-the-art technology, will inevitably result in unnecessary work, costs, and frustration.

With this in mind, we have asked ourselves the following question: Is it possible to develop an effective conversational AI that enables easier access to privacy knowledge and that thus results in better privacy by design at our organization? In this blog post, we will present how we have developed a custom chatbot to try and answer this question. To this end, we will also discuss how we measure “enablement” by looking at how the chatbot is used to access privacy knowledge and how that ultimately enables product teams to achieve a higher level of privacy by design.

2. Aim

Developing a chatbot inevitably attracts a healthy amount of scepticism. Many have experienced the disappointment of a “not-so-good” chatbot and have been frustrated by the unmet expectation that the chatbot would be as good as (if not better than) human interaction. In terms of User Experience (UX), a chatbot designed and implemented for the wrong task will inevitably disappoint users. Studies show that when comparing services received by a chatbot to services received by a human, most people prefer the latter (Drift, 2020).

So why would our chatbot be any different and actually be effective? Well, for starters, we have a clear scope, and we have developed it to work within a set of well-defined constraints based on the specific needs of the organization and on knowledge of existing, proven processes. Our aim is therefore to create a conversational AI that:

  1. responds only to internal queries from engineers, product teams, business stakeholders, and other non-privacy specialists;
  2. stays within the specific topic of privacy, including the GDPR, the internal privacy framework, and the regulations and laws of countries in which the organization operates, such as China and the UK;
  3. supports existing knowledge management and privacy by design processes.

However, while the scope might fluctuate during the development process, it's important not to bite off more than one can chew by assuming that the chatbot will be able to cater to everyone with everything. Instead, it's better to have a well-scoped chatbot that serves a specific, framed purpose (like privacy) for a limited user base.

 

3. Building Bob

To develop Bob we have followed the Software Development Lifecycle (SDLC). During the requirements gathering and analysis phase, it is paramount to decide on what type of chatbot will be developed and implemented. This decision must be made before starting to develop the software as it will fundamentally affect its functionalities. Conversational AI systems can be divided into three different categories: Question Answering Systems (QASs), Social Chatbots, and Focused Dialogue Systems.

QASs are developed to provide direct answers to user queries by learning from various data sources such as web pages or knowledge graphs. Social Chatbots, as their name suggests, are developed to provide users with non-topic-specific responses that aim for a high level of emotional intelligence. Finally, Focused Dialogue Systems are usually designed to fulfil narrow and specific tasks, such as booking a restaurant table.

This blog post will focus only on QASs, as Bob fits within this category.

3.1 Question answering systems (QAS)

Why develop and implement a chatbot to answer questions whose answers colleagues could already find within the organisation through various other sources? The main idea behind Bob, and behind QASs in general, is that the bot can provide quicker and more precise information. On a practical level, users no longer have to scroll through endless pages of information overload. Instead, the chatbot provides them with the answer they need by interpreting and responding to natural language queries.

When it comes to privacy, this essentially means that a privacy QAS like Bob is trained on tabular data that correlates types of data with specific regulatory requirements in each jurisdiction in which the organization operates (see the example in Table 1 below).

 

| Market | Type of data | Requirements |
| --- | --- | --- |
| Denmark (EU) | Colour of the sky (non-personal data) | You can process this data. |
| Denmark (EU) | Name (personal data) | You cannot process this data unless you meet the requirements of GDPR art. 6. |
| Denmark (EU) | Health information (sensitive personal data) | You cannot process this data unless you meet the requirements of GDPR art. 6 and art. 9. |
| China (non-EU) | Colour of the sky (non-personal data) | You can process this data in China. |
| China (non-EU) | Name (personal data) | You can process personal data if you meet requirement X. |
| China (non-EU) | Health data (sensitive personal data) | You can process sensitive personal data if you meet requirement Y. |

Table 1.

This enables the chatbot to learn, in a structured way, the regulatory requirements for each type of data element in each jurisdiction in which the organization operates. Consequently, by formulating a query such as "Can I collect customers' names in Denmark for my new application?", colleagues working on a new app for the Danish market would receive a detailed answer explaining when and how they could lawfully collect and process customers' names for their app in Denmark.

Clearly, this is just a simple example; the actual dataset used to train Bob is much larger and is based on the whole corpus of internal privacy knowledge, as well as on the regulations, laws, and cases applicable in the different jurisdictions in which the organization operates.
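To illustrate the idea, here is a minimal sketch of how such tabular knowledge could be expanded into question–answer pairs before being loaded into a knowledge base. The file name, column headers, and question templates are illustrative assumptions, not Bob's actual pipeline:

```python
import csv

def rows_to_qna_pairs(csv_path):
    """Expand each (market, type of data, requirements) row into
    question/answer pairs suitable for a Q&A knowledge base."""
    pairs = []
    with open(csv_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            market = row["Market"]
            data_type = row["Type of data"]
            requirement = row["Requirements"]
            # Several natural phrasings per row help the matcher generalise.
            questions = [
                f"Can I collect {data_type} in {market}?",
                f"What are the requirements for processing {data_type} in {market}?",
            ]
            pairs.append({"questions": questions, "answer": requirement})
    return pairs

# Example (hypothetical file mirroring Table 1):
# pairs = rows_to_qna_pairs("privacy_requirements.csv")
```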

From a more technical standpoint, Bob has been developed using Microsoft Azure's QnA Maker service, a cloud-based Natural Language Processing (NLP) service that enables individuals and organisations to develop conversational AIs trained on custom datasets. QnA Maker is typically used to find the best answer to a user input from a dataset of information, a so-called Knowledge Base (KB). However, since QnA Maker is not optimized for interpreting the goals and intents behind users' natural language sentences, it has been supplemented with Microsoft Azure LUIS (Language Understanding). LUIS helps identify users' intents by analysing the meaning of their natural language sentences, so that the best possible response can be provided to their queries.

That said, Bob's natural language understanding still presents some challenges that sometimes hinder the bot's ability to mimic a conversation between humans. Additional technologies are therefore being assessed to improve the flow of conversations and the overall UX; among them, transformer-based language models are currently being investigated and tested for effectiveness, cost, and applicability to this specific use case.
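As a rough sketch of how these two services can be chained, the snippet below first asks LUIS for the top intent behind a query and then fetches the best-matching answer from the QnA Maker knowledge base. The endpoints, keys, and IDs are placeholders based on the standard Azure APIs, not Bob's actual configuration:

```python
import requests

# Placeholders: real values come from the Azure portal.
QNA_ENDPOINT = "https://<your-resource>.azurewebsites.net"
QNA_KB_ID = "<knowledge-base-id>"
QNA_KEY = "<endpoint-key>"
LUIS_ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"
LUIS_APP_ID = "<luis-app-id>"
LUIS_KEY = "<prediction-key>"

def top_intent(query: str) -> str:
    """Ask LUIS (prediction v3) for the most likely intent behind a query."""
    url = (f"{LUIS_ENDPOINT}/luis/prediction/v3.0/apps/{LUIS_APP_ID}"
           f"/slots/production/predict")
    r = requests.get(url, params={"subscription-key": LUIS_KEY, "query": query})
    r.raise_for_status()
    return r.json()["prediction"]["topIntent"]

def best_answer(query: str) -> str:
    """Ask the QnA Maker runtime for its best-matching answer."""
    url = f"{QNA_ENDPOINT}/qnamaker/knowledgebases/{QNA_KB_ID}/generateAnswer"
    r = requests.post(
        url,
        headers={"Authorization": f"EndpointKey {QNA_KEY}"},
        json={"question": query, "top": 1},
    )
    r.raise_for_status()
    return r.json()["answers"][0]["answer"]

query = "Can I collect customers' names in Denmark for my new application?"
print(top_intent(query), "->", best_answer(query))
```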

 

4. How can Bob support?

A privacy chatbot is obviously not a substitute for a privacy advisor, but rather a tool that points non-privacy experts and product teams in the right direction as they assess the privacy risks of a digital product under development. Since there is a constant flow of new products being developed, it can be demanding for privacy teams to keep pace with assessing privacy risks, as the task requires following a product from the development phase to the decommissioning phase to ensure that risks are identified, mitigated, and/or treated. In the same way, it can be challenging for product teams to keep track of privacy by design requirements, especially when developing products that will be used by customers across different jurisdictions. Bob therefore aims to ease access to privacy support and to serve as a tool our colleagues can use to ask the right questions and obtain answers that empower them to become the first line of defence against privacy risks, improve compliance, and better protect customers' data.

4.1 Privacy by design

One of the requirements when developing new digital products is to embed privacy as a key component (also known as "Privacy by Design"). This is a principle first developed by Canadian privacy regulator and scholar Ann Cavoukian and later incorporated into laws such as the GDPR. Privacy by design requires organizations to embed privacy within the design and architecture of software and systems, so that privacy is de facto a key characteristic of any digital product being developed and put into production.

The privacy by design process must be followed by developers and product teams throughout the engineering of digital products that process personal data. Usually, product teams follow privacy by design processes by performing a self-assessment of their products, with support from privacy engineers. This assessment enables product teams to understand what privacy risks their products carry, how those risks can be mitigated or eliminated, and what technical engineering measures must be put in place for the product to be compliant and to process personal data in a trustworthy and secure way.

In this light, Bob aims to reduce the time and effort needed to complete this process by providing comprehensive and actionable information that product teams can use to complete the self-assessment and to better understand the technical engineering measures that must be put in place to develop their product in line with the privacy by design approach.

More specifically, Bob helps product teams complete the self-assessment more easily and accurately before the assistance of a privacy engineer is needed. At the same time, Bob operates as a one-stop shop for product teams to access knowledge about how privacy is embedded in products, how risks are treated, and so on.

 

It is easy to see that the privacy by design process is an ideal use case for a chatbot, as it features a defined process and a strict framework that must be followed by both product teams and privacy engineers.

4.2 General privacy support

The second important area in which the chatbot offers support is general privacy-related inquiries. It is important to stress that Bob is effective on simple and straightforward inquiries since, as mentioned, the aim of the chatbot is to provide simple, effective answers to basic questions and thereby avoid the pitfalls of current conversational AI technologies. This in turn means that if questions become too complicated, the output from the chatbot can seem insufficient, especially in light of the limitations of NLP. However, the chatbot is constantly learning, so complex questions enable us to further improve Bob's ability to cater to the needs of our coworkers and to raise the bar for what it can and cannot do.

 

5. What are the results we expect to see?

As for the Privacy by Design process, privacy engineers should have experience answering hands-on questions related to product development, which provides a good baseline for predicting which questions might be put to the bot in certain situations. This is an optimal framework for designing and building version 1.0 of a privacy chatbot.

As the chatbot begins to help coworkers with privacy knowledge, collecting feedback and opinions based on their experience is of paramount importance. Doing so enables Bob's content, functionality, and design to be improved according to the needs of the organisation. Comparing feedback rounds, and evaluating usage against the investment required to develop, improve, and maintain the bot, lets us see whether the chatbot delivers the expected support and outcomes at the budgeted cost.

 

6. Conclusion

Overall, while conversational AIs – or chatbots – can be effective at solving simple tasks, such as Q&As and hospitality reservations, in many cases their potential has been overhyped, often leading organisations to waste time and resources on them, especially when they are built with the aim of solving complex tasks.

Nevertheless, we have taken a different approach to designing and developing a chatbot to maximise our chances of success. First, we have defined a clear scope as to what we wanted our bot to do. Second, we have laid out constraints so that the bot would be tailored to the needs of the organisation and be trained on knowledge sourced from existing processes and practices. This has enabled us to develop a conversational AI that can support coworkers with seamless and stress-free access to contextualised privacy knowledge that ultimately improves Privacy by Design levels and increases coworkers’ confidence in what they know about privacy and how they can use it to develop better digital products.

While building the chatbot, we have concluded that it is important to limit the scope of the chatbot's functionality rather than be tempted by the many potential use cases one can think of. Well-defined constraints help carry the process forward and create a product that can enable increased privacy knowledge for the coworkers involved in specific privacy processes.

Ultimately, to measure the success of our conversational AI in relation to privacy support, we have set a quantitative parameter ("usage") and a qualitative parameter ("feedback"). Over the next year, we will repeatedly evaluate the performance of the chatbot – especially in connection with general privacy advice and Privacy by Design, following the internal process. As mentioned in the intro, the "enablement" or performance of Bob will be assessed by surveying coworkers for feedback, as well as by gathering quantitative data on the usage of the bot and benchmarking it against data relating to the Privacy by Design self-assessment, such as the average time of completion and the time spent on each question. Doing so will yield effective metrics that we will use to evaluate the overall effectiveness of Bob and to decide on improvements and modifications.
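As a rough sketch of what the quantitative side of this benchmarking could look like (the log format and figures below are hypothetical, purely for illustration):

```python
from statistics import mean

# Hypothetical records: one per completed privacy-by-design self-assessment.
assessments = [
    {"used_bot": True, "minutes_to_complete": 42, "questions": 18},
    {"used_bot": False, "minutes_to_complete": 95, "questions": 18},
    # ... more records collected over the year
]

def avg_completion_minutes(records, used_bot):
    """Average completion time for teams that did or did not use the bot."""
    times = [r["minutes_to_complete"] for r in records if r["used_bot"] == used_bot]
    return mean(times) if times else None

def avg_minutes_per_question(records, used_bot):
    """Average time spent per self-assessment question."""
    rates = [r["minutes_to_complete"] / r["questions"]
             for r in records if r["used_bot"] == used_bot]
    return mean(rates) if rates else None

for used in (True, False):
    print("with Bob" if used else "without Bob",
          "->", avg_completion_minutes(assessments, used), "min total;",
          round(avg_minutes_per_question(assessments, used), 1), "min/question")
```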

 

References

Drift & Heinz Marketing (2020). State of Conversational Marketing. Available at: https://now.drift.com/state-of-CM-report
