Generative Artificial Intelligence (AI) services like Microsoft Security Copilot and Microsoft 365 Copilot offer tremendous value when used appropriately in an organizational context. However, protecting these services from misuse is crucial. Microsoft Entra Conditional Access policies provide an effective way to enforce access controls and safeguard these AI services.
Why Conditional Access for Generative AI Services?
You can apply Conditional Access policies to these Generative AI services using existing policies that target:
- All resources for all users.
- Risky users or risky sign-ins.
- Users with insider risk.
This blog demonstrates how to target specific Generative AI services, such as Microsoft Security Copilot and Microsoft 365 Copilot, for tailored policy enforcement.
Step 1: Create Targetable Service Principals Using PowerShell
In this test case, I will demonstrate how to control Microsoft Security Copilot access using an Entra ID Conditional Access (CA) policy. First, let's check the status of Microsoft Security Copilot and the corresponding Service Principal in my tenant.
Security Copilot status (screenshot)
At this point, there is no Service Principal available for Security Copilot in the tenant. Next, let's verify this through the Conditional Access policy creation wizard.
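You can also verify this from PowerShell. Here is a minimal sketch, assuming the Microsoft Graph PowerShell SDK is installed (the AppId is the well-known Security Copilot application ID used throughout this post); an empty result means no Service Principal exists yet:
# Check whether a Service Principal for Security Copilot already exists in the tenant
Connect-MgGraph -Scopes "Application.Read.All"
Get-MgServicePrincipal -Filter "appId eq 'bb5ffd56-39eb-458c-a53a-775ba21277da'"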
Below are the steps to add these service principals using the Microsoft Graph PowerShell SDK:
# Connect with the appropriate scopes to create service principals
Connect-MgGraph -Scopes "Application.ReadWrite.All"

# Create the Service Principal for Security Copilot (Microsoft Security Copilot)
New-MgServicePrincipal -AppId bb5ffd56-39eb-458c-a53a-775ba21277da
Use the following command to register the Microsoft 365 Copilot Service Principal:

# Register the Service Principal for the Enterprise Copilot Platform (Microsoft 365 Copilot)
New-MgServicePrincipal -AppId fb8d773d-7ef8-4ec0-a117-179f88add510
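As a quick sanity check (a minimal sketch, assuming both commands above completed successfully), you can confirm that both Service Principals now exist in the tenant:
# List the two newly created Service Principals by their well-known AppIds
Get-MgServicePrincipal -Filter "appId eq 'bb5ffd56-39eb-458c-a53a-775ba21277da' or appId eq 'fb8d773d-7ef8-4ec0-a117-179f88add510'" |
    Select-Object DisplayName, AppId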
Step 2: Create Conditional Access Policies
To adopt these services securely, Conditional Access policies should enforce requirements such as:
- Phishing-resistant MFA for all users of Generative AI services.
- Access only from compliant devices when insider risk is moderate.
- Blocking access when insider risk is elevated.
Example: Require Phishing-Resistant MFA
1. Sign in to the Microsoft Entra admin center as a Conditional Access Administrator.
2. Navigate to Protection > Conditional Access > Policies.
3. Select New policy.
4. Name the policy (e.g., "Phishing-Resistant MFA-for-Security-Copilot-Access").
5. Under Assignments > Users or workload identities, include All users and exclude your emergency access accounts.
6. Under Target resources > Resources (formerly cloud apps) > Include > Select resources, choose Security Copilot (Microsoft Security Copilot), AppId bb5ffd56-39eb-458c-a53a-775ba21277da.
7. Under Access controls > Grant, select Require authentication strength and choose Phishing-resistant MFA from the list.
8. Select Create.
9. After validation in report-only mode, switch the Enable policy toggle to On.
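The equivalent policy can also be created with the Microsoft Graph PowerShell SDK. The following is a minimal sketch rather than a definitive implementation: the excluded user ID is a placeholder for your break-glass account, and the authentication strength ID is the built-in Phishing-resistant MFA strength, so verify both values in your tenant before use.

# Requires the Microsoft.Graph.Identity.SignIns module
Connect-MgGraph -Scopes "Policy.ReadWrite.ConditionalAccess"

$policy = @{
    displayName = "Phishing-Resistant MFA-for-Security-Copilot-Access"
    state       = "enabledForReportingButNotEnforced"   # start in report-only mode
    conditions  = @{
        users = @{
            includeUsers = @("All")
            excludeUsers = @("<break-glass-account-object-id>")   # placeholder: your emergency access account
        }
        applications = @{
            includeApplications = @("bb5ffd56-39eb-458c-a53a-775ba21277da")   # Security Copilot
        }
    }
    grantControls = @{
        operator               = "OR"
        authenticationStrength = @{
            id = "00000000-0000-0000-0000-000000000004"   # built-in Phishing-resistant MFA strength
        }
    }
}

New-MgIdentityConditionalAccessPolicy -BodyParameter $policy

Once the report-only sign-in data looks correct, update the policy state to enabled, or flip the toggle in the portal as described above.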
Best Practices: User Exclusions
While configuring Conditional Access policies, consider excluding the following accounts to prevent unintended lockouts:
- Emergency access or break-glass accounts: These accounts are critical for recovering access in case of misconfigurations.
- Service accounts and service principals: These accounts are used for non-interactive tasks and should be targeted with Conditional Access for workload identities instead of user-based policies.
For more information, refer to Manage emergency access accounts in Microsoft Entra ID.
Security Copilot Access Testing
When a user signs in to Security Copilot, the phishing-resistant MFA Conditional Access policy is triggered, prompting them to register and use a phishing-resistant MFA method, such as a passkey, before access is granted.
Note: These Conditional Access policies apply to the Security Copilot standalone experience but do not affect embedded experiences.
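To confirm the policy is actually being evaluated, you can inspect the sign-in logs. Here is a minimal sketch, assuming the AuditLog.Read.All scope (adjust the filter to your needs):
# Show recent Security Copilot sign-ins and their Conditional Access outcome
Connect-MgGraph -Scopes "AuditLog.Read.All"
Get-MgAuditLogSignIn -Filter "appId eq 'bb5ffd56-39eb-458c-a53a-775ba21277da'" -Top 5 |
    Select-Object UserPrincipalName, AppDisplayName, ConditionalAccessStatus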
Conclusion
By implementing Conditional Access policies for generative AI services, you can enforce advanced security measures such as phishing-resistant MFA, ensuring that only trusted authentication methods are used. These policies allow you to limit access to compliant devices based on risk levels, adding an extra layer of protection. Additionally, they help safeguard your organization from potential misuse of AI services by restricting access to authorized users and devices. Tailoring these policies to your organization’s unique needs maximizes the value of tools like Microsoft Security Copilot and Microsoft 365 Copilot, all while protecting your data and users in an increasingly complex threat landscape.