Generative AI Toolkit
Download a PDF of the State Bar of Georgia’s Generative AI Toolkit.
Artificial Intelligence is transforming the legal profession at an unprecedented pace. From automating document review and contract analysis to enhancing legal research and client service, AI is streamlining workflows and expanding capabilities across firms, courts and legal departments. These tools are not just improving efficiency, however. They are reshaping how legal professionals think, strategize and deliver value. As AI systems grow more sophisticated, they are also raising new questions about ethics, transparency and accountability, prompting the legal field to evolve alongside the technology.
For lawyers, this shift is not optional. Whether you are enthusiastic about innovation or skeptical of its implications, AI is already influencing the expectations of clients, the structure of legal work and the standards of professional competence. Ignoring it risks falling behind in a profession that is being redefined by data-driven insights and intelligent automation. This toolkit is designed to help legal professionals engage with AI thoughtfully and responsibly so you can lead, adapt and uphold the values of the legal system in a rapidly changing world.
🧠 What It Is: AI that finds, retrieves and summarizes existing information.
📌 Think: Search + pattern matching.
⚖️ Legal Example: A legal research tool like Westlaw Edge pulling relevant case law based on a query.
💬 Plain Speak: “It finds and summarizes what already exists.”

🧠 What It Is: AI that creates new content (text, images, code) by learning patterns in large datasets.
📌 Think: Creating something that sounds like or looks like what it’s been trained on.
⚖️ Legal Example: Using Harvey AI to draft a first version of a contract or legal memo.
💬 Plain Speak: “It creates new things based on what it has learned.”

🧠 What It Is: AI that applies logic to evaluate information, make decisions or solve problems—often combining steps or analyzing cause and effect.
📌 Think: Multi-step thinking or contextual analysis.
⚖️ Legal Example: An AI tool that helps assess litigation risk by analyzing facts and predicting outcomes.
💬 Plain Speak: “It doesn’t just spit out facts—it thinks through what they mean.”

🧠 What It Is: AI that can act on your behalf—autonomously completing tasks across systems based on goals you set.
📌 Think: AI “agents” that operate like digital interns or co-pilots.
⚖️ Legal Example: An AI agent that can handle a client intake process, file forms with the court or monitor deadlines across tools.
💬 Plain Speak: “It takes action, not just gives information.”
As AI becomes more commonplace in legal practice, it is critical that attorneys maintain their professional duties. Ethical obligations around competence, confidentiality, diligence, transparency, and supervision are not diminished by the use of technology. In fact, AI amplifies the importance of these duties.
Attorneys must ensure that these professional duties are met whenever AI tools are used. When specific circumstances warrant consultation and disclosure about the use of AI tools, attorneys should also ensure that clients are informed about how AI is used in their matter.
Ultimately, AI is a tool. It cannot apply legal reasoning, understand justice or assume ethical responsibility. That remains the role of the lawyer. This toolkit is here to ensure you uphold that responsibility as the practice of law evolves.
Georgia Rule of Professional Conduct 1.1 requires that lawyers provide competent representation to their clients. This includes possessing the requisite knowledge, skill, thoroughness and preparation necessary for the representation. As generative AI becomes increasingly integrated into legal practice, lawyers must navigate several practical considerations to ensure compliance with this rule.
Understanding and Knowledge of AI Limitations and AI Capabilities
Lawyers must be aware of the strengths and weaknesses of generative AI tools. Understanding what these tools can and cannot do is essential to avoid over-reliance on them. It also requires ongoing learning as technology evolves.
Human Oversight Required
Lawyers must thoroughly review and vet any AI-generated content to ensure that the information is accurate and aligns with the legal standards required for competent representation. Lawyers remain responsible for all content, whether or not AI is involved, and must supervise AI-generated work as they would supervise the work of humans. AI algorithms can also reflect or perpetuate biases, an additional area where lawyers are responsible for assessing AI output.
Communication with Clients and Avoiding Misleading Clients
The lawyer is the one representing the client and responsible for providing competent representation. Lawyers should not misrepresent their use of AI tools to clients. Clients must be made aware of the role of AI in their case and how it impacts the representation. Lawyers should maintain transparency with clients regarding the use of AI tools in their cases, including the benefits and potential risks involved.
Keeping Skills Updated
The legal landscape is constantly evolving, especially with technological advancements. Lawyers must commit to ongoing education about generative AI and its implications on legal practice.
Georgia Rule of Professional Conduct 1.5 governs the fees lawyers may charge clients for their services. It requires that fees be reasonable and communicated clearly to clients.
Determining Reasonableness of Fees
Lawyers should assess how the use of generative AI affects the costs associated with providing legal services. While AI can reduce the time spent on tasks, it is essential to evaluate whether this reduction justifies a change in fees.
Clear Communication of Fees and Transparency in Billing Practices
Lawyers must clearly communicate how generative AI influences their fee structure to clients. This includes explaining any changes in billing rates or the rationale behind adopting AI tools. If a lawyer fails to communicate this, it also implicates a Rule 1.4 issue.
Client Consent and Agreement—Informed Consent for Fees
Before implementing new AI tools that may affect billing, lawyers should seek informed consent from clients. Clients should understand how AI impacts the services provided and any associated costs. Depending on the use, lawyers may need to obtain informed consent for matters beyond billing as well, and memorializing these terms in a written engagement agreement is good practice.
Avoiding Unreasonable Fees
Lawyers must ensure that the use of generative AI does not lead to unreasonable fees. This means balancing efficiency gained from AI with fair pricing for clients.
Generative AI can significantly impact the confidentiality requirements established in Rule 1.6 in several ways.
Data Security Risks and Vulnerability to Breaches
When lawyers input sensitive client information into generative AI systems, there is a risk that this information could be stored, processed or accessed by unauthorized parties. If the AI tool is not secure, it may expose confidential data, and the self-learning nature of AI means there is a risk that the tool could disclose information about a client to outside users. A lawyer should not input confidential client information into any generative AI solution that lacks adequate confidentiality and security protections. Lawyers should anonymize client information and avoid entering details that can be used to identify the client, and should make sure the technology’s use of client data is consistent with other privacy laws. The obligation to protect information also extends to prospective clients (Rule 1.18) and former clients (Rule 1.9).
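As a simple illustration of the anonymization point above, the hypothetical Python sketch below scrubs a few obvious identifiers (email addresses, phone numbers, Social Security numbers) from text before it is submitted to a generative AI tool. The patterns and function names are illustrative assumptions only; real-world redaction of client data requires far broader coverage (names, addresses, matter numbers) and professional judgment.

```python
import re

# Hypothetical illustration only: a minimal sketch of scrubbing a few
# obvious identifiers from text before it is pasted into an AI tool.
# Real matters require far more thorough redaction than these patterns.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.\w+"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace each matched identifier with a bracketed placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

sample = "Client reachable at jroe@example.com or 404-555-1234, SSN 123-45-6789."
print(redact(sample))
```

A tool like this only reduces risk; it does not substitute for choosing an AI service with adequate contractual confidentiality and security protections.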
Informed Consent and Client Awareness
Lawyers should obtain informed consent from clients before using generative AI tools that may involve their confidential information. Clients should be aware of how their data will be used, stored and protected.
Accountability and Oversight
While generative AI can assist in legal processes, attorneys must remain accountable for the information generated. They need to ensure that AI-generated content does not inadvertently disclose confidential client information.
Software License Protections
Practitioners should exercise caution when relying on software or AI platforms governed by “shrinkwrap” or “clickwrap” licenses. These standard-form agreements—often accepted implicitly by using the service—frequently include broad disclaimers and may expressly exclude any obligation of confidentiality. As a result, information shared with such tools may not be protected under a legally enforceable duty of non-disclosure. To mitigate this risk, consider subscribing to a premium or enterprise-tier plan that expressly incorporates enhanced privacy protections. For instance, OpenAI’s ChatGPT Enterprise plan offers features such as data encryption, audit logs, customizable data retention policies and contractual data usage restrictions—all of which are designed to support confidentiality and compliance with applicable data protection regulations.
Training and Ethical Use
It is essential for legal professionals to receive training on the ethical implications of using AI tools. Understanding how to use these technologies responsibly is crucial for compliance with confidentiality rules.
AI as an Intake or Screening Device
The use of artificial intelligence (AI) tools, such as chatbots or virtual assistants, to screen or interact with prospective clients on a law firm’s website implicates several ethical considerations—particularly under Rules 1.7, 1.9 and 1.10 of the Georgia Rules of Professional Conduct, which address current and former client conflicts and imputed disqualifications within a firm.
Prospective clients may, without prompting, disclose sensitive or case-specific information through such AI-driven interfaces. If the firm currently represents—or has previously represented—an adverse party, that disclosure could create a potential or actual conflict of interest.
Georgia Rule of Professional Conduct 3.3 governs an attorney’s duties to the court, specifically prohibiting attorneys from making false statements of fact or law, failing to correct false statements or failing to disclose legal authority that is directly adverse to their client’s position. When considering the implications of using AI in the context of Rule 3.3, there are several key areas to consider.
Accuracy of Information Provided by AI
Risk of Misrepresentation
If an attorney relies on AI tools (like legal research tools, drafting software or predictive analytics) to gather facts, draft documents or make legal arguments, they must ensure that the information provided by AI is accurate. Under Rule 3.3, an attorney cannot present false or misleading information to the court. If an AI system provides incorrect or misleading data, the attorney could be held responsible for any resulting misrepresentation. A lawyer could also run afoul of Rule 3.1 regarding meritorious claims and contentions.
Duty to Correct
Rule 3.3 requires attorneys to correct false statements of fact or law. If an AI tool provides inaccurate legal reasoning or facts that an attorney does not catch, the attorney may fail in their obligation under Rule 3.3.
Disclosure of Adverse Legal Authority
If an AI tool fails to recognize or disclose relevant adverse legal authority in a case (for example, a ruling that directly contradicts the attorney’s argument), the attorney could violate Rule 3.3’s requirement to disclose such authority.
Artificial intelligence tools, including large language models, are known to generate fabricated legal citations—commonly referred to as “hallucinated” or “phantom” citations—that may appear facially valid but have no basis in actual legal authority. Practitioners must not assume that AI-generated citations are accurate, authentic, or citable without independent verification. Rigorous source-checking remains essential to ensure compliance with professional standards of candor and accuracy.
Automated Legal Drafting
Drafting Legal Documents
If AI is used for drafting legal documents (such as motions, briefs, contracts, etc.), the attorney must ensure the content is consistent with the law and does not misrepresent facts or the law. AI-generated documents may lack the nuanced understanding of a human lawyer, which could lead to inaccuracies or omissions that could violate Rule 3.3.
Ethical Dilemmas in Automation
Over-reliance on AI in legal drafting could lead to situations where an attorney does not sufficiently review the document, risking errors or omissions that could mislead the court. This could have significant consequences for both the client and the legal system.
Transparency and Accountability
If an attorney relies on an AI tool that they do not fully understand or cannot explain, they could face ethical challenges under Rule 3.3. This is especially important when it comes to the duty of candor to the court. An attorney must be able to fully explain how and why they reached a conclusion, something that may be difficult if the AI’s internal workings are not fully understood.
The attorney remains responsible for any misinformation or failures that arise from using AI tools. Even if the AI produces an error, it is ultimately the attorney’s responsibility to ensure that the court is not misled.
Georgia Rules of Professional Conduct (GRPC) 5.1 and 5.3 govern responsibilities of partners and lawyers in a management role supervising junior lawyers as well as responsibilities of lawyers who employ non-lawyer assistants. Rule 5.1 requires that partners in a law firm as well as lawyers in a managerial role make reasonable efforts to ensure that the firm has in effect measures giving reasonable assurance that all lawyers in the firm conform to the Georgia Rules of Professional Conduct. Rule 5.3 outlines the ethical duties lawyers have when working with non-lawyers, such as paralegals, clerks or other staff, to ensure that their actions comply with the Georgia Rules of Professional Conduct.
As a result, lawyers will need to have policies and training for both lawyers and non-lawyers.
AI as Non-Lawyer Assistant
Under GRPC 5.3, attorneys are required to supervise and ensure that their non-lawyer assistants comply with ethical rules. If AI is used in legal practice (for example, in drafting documents, legal research or case analysis), the attorney is responsible for ensuring that the AI tool operates within the bounds of ethical standards.
While AI does not require traditional “direct supervision” in the way that human assistants do, attorneys are still responsible for ensuring that the output generated by AI tools complies with professional standards (accuracy, confidentiality and ethical conduct). This includes reviewing the work produced by AI systems to make sure it aligns with the attorney’s ethical duties, such as the duty of candor to the court (under GRPC Rule 3.3), confidentiality (under GRPC Rule 1.6) and competence (under GRPC Rule 1.1). A lawyer should also vet vendor credentials and the security protocols the vendor has in place.
Training and Competence—Educating Staff on AI Use
Lawyers should provide training for their non-lawyer staff on the ethical use of AI tools, emphasizing the importance of maintaining client confidentiality and the integrity of legal work.
This special committee shall examine (1) how the Georgia Rules of Professional Conduct cover advancements in technology, particularly artificial intelligence, and the practice of law; (2) whether existing rules and Bar policies adequately address lawyers' technology-related conduct; and (3) how the State Bar of Georgia can aid its members as they integrate artificial intelligence and technological advances into the practice of law. It shall report its findings and recommendations to the Executive Committee, the Board of Governors, and the Supreme Court of Georgia.