AI at Marquette: Responsible Use FAQs and Recommended Tools for Educators

Applications of AI


“As AI continues to evolve, Jesuit universities have a unique opportunity to demonstrate responsibility and leadership. By fostering ethical leadership, promoting digital literacy, innovating responsibly, keeping education human-centered, and working as a global network, Jesuit institutions can help AI serve the greater good and enhance rather than undermine our mission.”

International Association of Jesuit Universities, July 2025

Marquette University has established a university-wide artificial intelligence task force to guide the responsible use of generative AI across campus. This multidisciplinary group, composed of five workgroups, is responsible for identifying where existing policies, procedures, and support structures should be updated or adapted, with a focus on ensuring that AI adoption at Marquette is responsible, effective, and aligned with the university’s Catholic, Jesuit mission.

While the task force continues its work and prepares recommendations, many members of the Marquette community are already exploring how generative AI fits into education, research, and daily work. The following FAQs provide guidance on some of the most common questions about using AI at Marquette today and can help faculty navigate new tools while broader policies and resources are still taking shape.

Marquette’s guidance is intentionally enabling, not restrictive. The goal is not to inhibit experimentation or efficiency, but to empower faculty to use AI tools confidently and responsibly.

New site provides guidance on using AI

Information Technology Services has launched a central site to help the campus community use generative AI appropriately and effectively. The site brings together approved tools, tiered data protection levels, responsible use expectations, informational events, and training resources all in one place.

What do I need to know about privacy, compliance, and responsible use?

Faculty and staff need to treat AI tools the same way they treat university information systems.

  • Protect your data: Use only approved tools for institutional, confidential, and regulated information. Store university data within systems under Marquette’s control.
  • Minimize data sharing: Provide only what is needed for the task.
  • Verify accuracy: AI output can sound confident yet still be inaccurate. People remain responsible for the accuracy and appropriateness of their content.
  • Prevent harm and bias: Be aware that AI can generate content that is out of context, insensitive, or biased. Do not use AI to automate decisions or offload responsibility in matters involving harassment, discrimination, or support for vulnerable individuals. These require human care.

Additional information regarding general guidelines for the use of AI can be found here.

What AI tools should I use for my university work?

It’s important to remember that not all AI tools are created equal. Some AI platforms retain user input and use it to train future responses. Sharing sensitive data with such tools can inadvertently expose it to others outside of Marquette. To assist faculty and staff, the university has established guidance on the use of AI tools.

For most faculty and staff, the recommended AI tool for university work is Microsoft Copilot. Copilot is managed within Marquette’s Microsoft 365 environment, providing privacy and compliance protections and keeping content in a secure, containerized environment that does not train public AI models. The Microsoft Copilot options are:

  1. Microsoft 365 Copilot (Full License): The most robust AI tool offered by the university, fully integrated into Microsoft 365 apps such as Word, Excel, Outlook, and Teams. Department approval is required for the annual license fee.
  2. Microsoft Copilot Chat (University Account): A free version available to all faculty and staff that helps with questions, drafting, summarizing, and brainstorming. Although Chat does not have direct access to your Microsoft 365 apps, you can share content and files directly with Chat to provide context to the AI.
  3. Microsoft Copilot Chat (Personal Account): A free version with the same features as the university-managed Copilot Chat, but without the data protections applied to university-managed accounts. Only publicly available data should be used with this version.

Learn more about how to use these tools and view the full list of AI tools currently under review.

Additionally, ITS can assist with questions about the available AI tools and how to use them. In some cases, other AI tools can be helpful, but they should be limited to low-risk scenarios, such as:

  • Brainstorming
  • Improving overall sentence clarity
  • Learning how AI works
  • Handling non-sensitive, non-institutional content

When is it appropriate to use AI, and when is it not?

AI can be used responsibly in many everyday tasks, especially if the output is reviewed by a human before being shared publicly or relied upon. Examples include drafting and revising emails and memos, organizing ideas and notes, summarizing discussions, taking meeting minutes using approved tools, brainstorming, and improving clarity and tone. Additional examples can be found in the Recommendations section of the AI Guidelines.

Be especially careful when accuracy, attribution, and professional judgment are required, or when dealing with sensitive or regulated data such as FERPA-protected student data, PCI, HIPAA, research protocols, or employee information. In these situations, choose approved tools, limit the data you share, and carefully verify the output. Additional guidance can be found in the AI Guidelines on Data Sensitivity Levels.

Additionally, when preparing materials for publication or use in proposals, be aware that the appropriateness of using AI may be governed by external policies. Review those policies in advance to ensure compliance.

For collaborative projects, discuss the use of AI early in the process to ensure all contributors agree on whether and how AI tools will be used. Establishing expectations upfront supports transparency, consistency, and shared accountability.

AI should not be used to:

  • Replace human judgment or responsibility, or act as the author of record
  • Store, reuse, or train on university data outside of the university’s control
  • Bypass required policies or compliance processes

Is the use of AI required?

No. While many faculty and staff are eager to take advantage of these tools, others are not. Marquette’s aim is to reduce uncertainty and provide a clear path forward, not to prescribe one-size-fits-all adoption. Additional information regarding general guidelines for the use of AI can be found here.

What training and resources are available?

  • The Center for Teaching and Learning offers faculty-focused sessions on the effective and responsible use of AI and has developed faculty-specific AI resources.
  • ITS is preparing a series of GROW classes and short AI tips (videos and guides) that will be published on the ITS website.
  • Microsoft Learn offers self-paced learning modules on Copilot and related tools.

University events and training related to AI can be found here. Content will continue to grow as training support becomes available.

What’s next for AI at Marquette?

The artificial intelligence landscape is constantly evolving. The AI Task Force is building a living, adaptive institutional approach to AI, so this guidance should be understood as a starting point, not the end.

Expect continued updates on:

  • Supported tools and responsible-use guidelines
  • Training opportunities
  • Examples from education, research, and administration


