Frequently asked questions
Why do I need this?
Public AI tools are like talking about police work in a busy coffee shop where you don’t control who’s listening. GovShield is like having that same conversation inside a locked briefing room where nothing leaves the building.
When someone uses a public AI tool, they’re typing information into a system that’s designed for the general public. The company running it controls the building, the rules, and the records. Even if they say they’re careful, the agency doesn’t own or oversee that environment.
GovShield was built differently. It works more like agency equipment. The information goes in, the system helps with the task, and then it stays inside an environment the agency controls. Nothing gets reused, shared, or remembered.
Is GovShield CJIS compliant?
GovShield was built to be compliant from the ground up; its founder and lead architect is a law enforcement officer who works closely with the DOJ.
GovShield was developed by an entirely U.S.-based development company, Kinetech Cloud, on a Siemens platform, and is hosted on Microsoft Azure Government Cloud. All developers completed a background check before beginning work.
GovShield was architected to align with CJIS Security Policy requirements while supporting a shared responsibility model. Final CJIS authority remains with the agency, and GovShield provides the technical, contractual, and audit controls necessary to support compliance.
How does GovShield GPT keep PII safe and compliant?
GovShield GPT is designed from the ground up to protect sensitive law enforcement data — including personally identifiable information (PII), criminal justice records, and internal reports — while ensuring full control and compliance for your agency.
Here’s how:
1. Secure Upload & Storage of PII
- End-to-End Encryption: All data, including PII, is encrypted in transit (TLS 1.2+) and at rest (AES-256) within Azure Government Cloud, a FedRAMP High and CJIS-compliant environment.
- Isolated Tenant Architecture: Each agency's data is completely siloed from other tenants using strict role-based access controls (RBAC). No cross-agency access is possible.
- Audit Logs: Every upload, access, and interaction with PII is logged and timestamped for traceability.
2. Zero Data Sharing With LLMs or Third Parties
- No Model Training: Any content you upload — including PII, reports, or case notes — is never used to train or improve the underlying AI model.
- Stateless AI Requests: Each prompt you submit is processed in-memory and then discarded unless explicitly saved by your agency. Nothing is retained or reused by the model provider.
- No Third-Party Data Access: Your uploads do not leave the secure environment or get shared with vendors, sub-processors, or AI labs for analysis or feedback loops.
3. Agency Data Ownership & Control
- You Own Your Data: GovShield GPT is a processor, not a controller — all data remains the legal property of your agency.
- No Retention Without Consent: We don't store or log prompts or results unless your agency explicitly configures the system to do so.
Can't I just use ChatGPT or Gemini to do the same thing?
⚠️ 1. Data May Be Used for Model Training
- OpenAI, Google, and Anthropic all state in their Terms of Service that inputs may be used to improve their models, unless:
  - You're on a specific enterprise plan with an explicit data retention opt-out
  - You've set up isolated environments (rare, expensive, and not the default)
⚠️ 2. Lack of Data Residency Guarantees
- Public platforms often process data outside the U.S. or in shared cloud environments, violating:
  - CJIS policies
  - State privacy laws (e.g., CCPA, CPRA)
  - Data localization policies in procurement agreements
⚠️ 3. No Audit Logs or Access Controls
- No ability to:
  - Log which user accessed what data
  - Prevent data from being accessed by foreign nationals (a CJIS requirement)
  - Prove to regulators or courts that the data stayed secure
⚠️ 4. End User Responsibility
- Most public AI tools shift legal liability to you as the user:
  - You certify the data isn't confidential
  - You accept all consequences of a breach or misuse
In a police or government context, this opens the door to public records violations, FOIA/PRA compliance issues, civil liability, and even criminal exposure if PII or victim/witness information is mishandled.
The Risk Is Real
Uploading sensitive information to ChatGPT, Claude, or Gemini without an enterprise license, a signed DPA, and full control over model behavior is equivalent to copying that data into an uncontrolled third-party environment with no enforceable guarantees.
By contrast, GovShield GPT gives agencies:
- Technical security (CJIS/FedRAMP)
- Legal defensibility (contractual + operational controls)
- Peace of mind that their data stays theirs — and stays safe
How are you different from other AI solutions for law enforcement?
- Most importantly, we are affordable!
- We offer quick access without contracts for smaller agencies, or for agencies that only want to sign up a few users — enter your billing info and get started.
- We are an all-in-one AI solution. You can find providers who offer report writing, case review, and more, but no other provider offers as many AI solutions in a single hub as we do.
Can I create reports in GovShield GPT?
Technically, yes. However, we don't recommend it: many watchdog organizations have been outspoken about the risk AI-drafted reports pose to case integrity, and many legislatures are proposing laws to prohibit or limit the practice.
Instead, we offer Report Review: you write your report, and GovShield GPT provides feedback and suggestions to help you revise and improve it.
