Artificial Intelligence: Principles, Usage, and Risks



Southern Adventist University


Southern Adventist University is committed to ensuring a safe and secure environment for all employees, partners, and customers through the responsible use of generative artificial intelligence* (AI). AI can significantly enhance our mission to provide a faith-based, ethical, and high-quality educational experience by optimizing our workflows, increasing our opportunities to do more of the work we love, and improving work-life balance for many of our employees. Its use must align with ethical and moral principles rooted in our faith, ensuring respect for human dignity, privacy, and fairness. This document outlines the ethical principles, appropriate uses, and key risks for integrating AI into our work consistent with our morals, values, and ethical standards.

Ethical Principles

As in all of our work, our Code of Ethics and university policies apply to work with artificial intelligence, including, but not limited to, the following principles:
Honesty. Standing on the side of honesty (1 Cor. 13.6), ethical people are truthful, sincere, forthright, and tactfully candid (Eph. 4.15). They avoid all deceptive practices (Exod. 20.16; Prov. 12.19). They do not cheat, steal, plagiarize, lie, deceive, or act deviously (Exod. 20.15).
Integrity. Keeping their consciences clear (1 Pet. 3.16), ethical people are principled, graciously courageous, honorable, and upright (Prov. 21.29). They act on convictions and conscience (1 Tim. 1.5). They do not place expediency over principle (John 11.50).

Appropriate Uses


Keep in mind that AI is a tool that should be used to enhance human capabilities, not replace or diminish them. Southern promotes AI application in ways that support human flourishing and personal growth.
Research: Understanding unfamiliar topics.
Summarizing: Shortening and synthesizing lengthy public information.
Developing: Speeding up writing, design, or development processes.
Decision support: Analyzing data and offering suggestions to help people make decisions.
Brainstorming: Generating ideas.
Creating efficiencies: Streamlining repetitive administrative tasks and improving accuracy in processes such as admissions, scheduling, and resource allocation.
Data: Collecting, processing, and analyzing data, including examining vast public text-based data for patterns or anomalies.
Editing: Grammarly and other tools can help edit emails and other content; however, they do not replace MUR review for public-facing materials.

Key Risks


1. Privacy Concerns: Generative AI can infringe on privacy rights if given access to confidential, sensitive, or non-public information, including passwords, certificates, personally identifiable information, and protected data; AI systems may use your input for training, and the companies behind them may access what you submit.
2. Intellectual Property Risks: Using AI may jeopardize intellectual property protection, and generated content may infringe on others' intellectual property rights.
3. Honesty in Creation: Ensure the use of AI is honest and not deceptive (e.g., copying an artist’s style without permission, using AI in an author-credited publication without citing AI, creating an AI-produced picture with a student’s image).
4. Transparency: Clearly state when AI has been used in content creation, particularly when AI-generated material makes up the majority of the end product and the work is credited to named author(s).
5. Accuracy and Liability: AI outputs may be inaccurate; relying on them without proper review, including independently verifying the accuracy of all information, can spread misinformation and expose the university to liability.**
6. Bias and Discrimination: AI may produce biased or inconsistent decisions, risking non-compliance with university policies, ethical standards, and the law.
7. Dehumanization: Used improperly, AI can dehumanize interactions, reduce individuals to mere data points, and serve as a poor substitute for people in roles that are truly creative or personal. AI should enhance human capabilities, not replace or diminish them.

Additional Notes:
• Guidelines for Procurement: Contact Information Technology before acquiring AI products. Vendor management teams will validate products to ensure compliance and avoid duplicate spending.
• Definition of Personally Identifiable Information: The combination of any two pieces of information that could personally identify someone (e.g., a name combined with a phone number).
• Tools: Generative AI tools include ChatGPT, Microsoft Copilot, Meta’s AI Image Generator, and others; this list is continuously evolving.


**From Guidance to Civil Servants: "These tools can produce credible looking output. They can also offer different responses to the same question if it is posed more than once, and they may derive their answers from sources you would not trust in other contexts. Therefore, be aware of the potential for misinformation from these systems. Always apply the high standards of rigor you would to anything you produce, and reference where you have sourced output from one of these tools."