How Safety Measures Are Applied to AI Usage in Nualang
Services such as OpenAI and DeepSeek have made generative AI readily available as general-purpose tools that offer significant benefits to users in all industries and vocations, including teaching and education. It is available both directly through those AI companies' own end-user tools and through third-party applications such as Nualang.
As with any new technology, there are benefits and risks. In this case, the benefits are so significant that it would not make sense to opt out of using it, so it behooves us to mitigate the risks. In this article, we'll provide an overview of where and how we use AI currently, what risks we identify, and what steps we take to reduce those risks.
This is a very fast-changing technology, and we will continue to adapt our approaches as AI technology providers offer new services and change existing ones. We will publish updated information as our approaches evolve.
AI is used in a number of Nualang's services to help users get more done, more quickly. At present, these AI features are available only to our own content creators and teachers; students do not have direct access to AI services. The features of Nualang that currently make use of generative AI tools and models are:
Some of the concerns often raised about third-party services, particularly those involving AI and machine learning, are:
Nualang respects data privacy and manages personal information appropriately. In all parts of Nualang that make use of generative AI, we ensure that no personal information is used or required to generate content or to evaluate student lessons and other inputs.
Where we use third-party services, we choose only those that allow us to explicitly opt out of having any data we send used to train or improve their models.
As noted at the start of this article, Nualang does not provide auto-generated content to students at this point in time; that is subject to further analysis. Content generation is used to help authors, teachers, and reviewers. As such, final editorial control lies with those users, and they decide whether content is appropriate for the intended students, as has always been the case.
Above, we discussed some of the risks with AI and how we mitigate them. But we can also use AI to help with mitigation, whether the risks come from the AI itself or from user-generated content. For example, we use AI in Nualang to guard against inappropriate content finding its way into lessons or other course content: images uploaded to Nualang are checked automatically, and a warning is raised if the content appears inappropriate.
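To make that concrete, here is a minimal sketch of an upload-time image check. The scoring function, category names, and threshold below are assumptions for illustration, not Nualang's actual moderation model or settings.

```python
# Minimal sketch of an upload-time image check. The scoring function is a
# stand-in for the AI moderation model actually called; the category names
# and the 0.8 threshold are assumptions for this example.

INAPPROPRIATE_THRESHOLD = 0.8


def score_image(image_bytes: bytes) -> dict[str, float]:
    """Stand-in for the real AI moderation call: returns a score per category."""
    return {"violence": 0.02, "adult": 0.01, "hate_symbols": 0.00}


def review_uploaded_image(image_bytes: bytes) -> dict:
    """Check an uploaded image and report which categories, if any, triggered a warning."""
    scores = score_image(image_bytes)
    reasons = [cat for cat, score in scores.items() if score >= INAPPROPRIATE_THRESHOLD]
    return {"flagged": bool(reasons), "reasons": reasons}
```

The key point is that the warning is advisory: the author retains final editorial control, as described above.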
Nualang places great emphasis on data privacy and safeguarding Personally Identifiable Information (PII) and does not export such data from its servers. In particular, personal identifiers are removed by string substitution from every query our AI code sends to an AI service, and the reverse substitution is applied to responses before they are returned to users. Pseudo-random placeholder identifiers, such as "Student Xyzabc", are used in this process.
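The sketch below illustrates the general shape of this substitution in Python; the function names, placeholder format, and the commented-out AI call are assumptions for the example, not our production code.

```python
import secrets
import string


def pseudonymise(text: str, names: list[str]) -> tuple[str, dict[str, str]]:
    """Replace each personal name with a pseudo-random placeholder before the
    text leaves our servers, and return the mapping needed to reverse it."""
    mapping: dict[str, str] = {}
    for name in names:
        token = "".join(secrets.choice(string.ascii_lowercase) for _ in range(6))
        placeholder = f"Student {token.capitalize()}"
        mapping[placeholder] = name
        text = text.replace(name, placeholder)
    return text, mapping


def restore(text: str, mapping: dict[str, str]) -> str:
    """Apply the reverse substitution to an AI response before it reaches users."""
    for placeholder, name in mapping.items():
        text = text.replace(placeholder, name)
    return text


# Example: the real name never leaves the server.
query, mapping = pseudonymise("Give feedback on Aoife's answer.", ["Aoife"])
# query -> "Give feedback on Student Xyzabc's answer."  (placeholder varies)
# response = call_ai_service(query)  # hypothetical call to the AI provider
# print(restore(response, mapping))  # real name reinstated before display
```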
Our customers' data is not used to train models, whether by us or by our third-party providers.
In our Generative AI Usage in Nualang article, we provide more detail about how AI is used across our services. For a high-level overview of how generative AI works, check out our previous article: Generative AI - How it works.