
Webinar

AI: A Treasure Trove of Legal Fees?

Wednesday, March 20, 2024, 3:00 PM ET


Discover the evolving frontier of artificial intelligence and its implications for the legal landscape in our on-demand webinar. As AI technologies weave their way into the fabric of businesses across sectors, they bring with them a complex web of legal challenges and opportunities.

See video policy below.

Post-Webinar Summary

The panel discussion focused on the legal challenges and opportunities presented by AI technology in various sectors. The panelists emphasized that while AI can create new business models and services, it also raises issues of user privacy, IP liability, and malpractice. They discussed the importance of understanding the legal implications of AI and the need for lawyers to stay updated with market knowledge on AI. The panel also touched on the potential for defamation liability when an AI program produces false information and the interplay between AI and strict product liability doctrine. They concluded by highlighting the role of AI in automating tasks in law firms, such as document review and legal research.

Read the full transcript in the FAQ section below.

This intricate landscape is poised to trigger an exponential increase in litigation, redefining the role of legal counsel and the strategies for navigating this new terrain. Join our esteemed speakers from Alumni Ventures: AI Fund Managing Partner Ed Tsai, Senior Principal Sophia Zhao, and Deputy General Counsel Christopher Browne, along with an expert panel, as they delve into the nuances of AI’s impact on legal practices and what it means for businesses and legal professionals alike.

Through their expert lens, attendees will gain insights into the burgeoning realm of AI litigation and its potential to reshape the legal industry.

Watch above now to unlock the strategic advantages of understanding AI’s legal implications.

Why you should watch:
  • Gain firsthand insights from leading experts in AI and legal practice.
  • Understand the potential legal challenges and opportunities presented by the rapid integration of AI in various industries.
  • Equip yourself with knowledge to navigate the evolving legal landscape influenced by AI technologies.

Alumni Ventures is America’s largest venture capital firm for individual investors.

Note: You must be accredited to invest in venture capital. Important disclosure information can be found at av-funds.com/disclosures. 

Frequently Asked Questions

  • Where is the transcript for this webinar?

    Speaker 1:
    Good morning and afternoon, everyone. Hope you’re doing well. As AI technology weaves its way into the fabric of businesses across sectors, it also brings with it a complex web of legal challenges and opportunities. We’re looking forward to a lively discussion among our esteemed panel of legal experts today.

    Before we get started, this presentation is for informational purposes only and is not an offer to buy or sell securities, which are only made pursuant to the formal offering documents for the fund. Please review important disclosures in the materials provided for the webinar, which you can access at www.av-funds.com/disclosures.

    While this webinar will deal with legal issues and many of the speakers are lawyers, it’s important to note that none of the presenters’ remarks constitute legal advice. Please know you’ll be on mute for the entire presentation, and this webinar is recorded and will be shared after the event. We encourage you to submit questions throughout the webinar, and we’ll try to answer your questions during the Q&A session.

    Thank you very much, and I’ll invite Ed Tsai, Managing Partner of our AI Fund, to walk you through the next few slides.

     

    Speaker 2:
    Thank you, Sophia. I’m pleased to present today together with my colleagues and special guests on the topic of “AI: A Treasure Trove of Legal Fees.”

    The interesting thing about this webinar is that it’s not just about legal tech or AI in legal practice—it’s about the opportunities that AI brings to lawyers and legal firms in general.

    We’ll start with a brief intro on Alumni Ventures and our AI Fund, then move right into our panel discussion, followed by a Q&A.

    There are two big opportunities here:

    1. AI is creating new business models and services. These will require new legal guidelines and protections. Issues like user privacy, IP liability, and malpractice are emerging because of AI. Recently, as many of you know, there was a case against Air Canada after its customer service AI bot hallucinated a nonexistent policy. The tribunal ordered the airline to pay the customer roughly $800 as a result.

    2. As top lawyers, you’ll have many AI companies wanting to be your clients. The question is, how can you differentiate yourself by building a track record of successful clients? Market knowledge of AI is very important as you select clients and provide them with advice.

    Speaker 2:
    Alumni Ventures was founded in 2014 to offer venture capital to individual investors. Since then, we’ve raised over $1.25 billion, invested in 1,300 portfolio companies, and built a team of 130 employees located across the U.S.

    We were recently ranked the #1 most active VC by PitchBook in 2023 and a top 20 VC in North America by CB Insights.

    Our AI team comprises four experienced venture investors who benefit from the work of a larger team of 30 investment professionals across Alumni Ventures, helping us source AI-related deals.

    What’s exciting about venture capital is the startups themselves—the technology, product, and business model innovations they create. Each innovation wave—from the internet to cloud computing to mobile, and now AI—has enabled the formation of new companies.

    I believe we’re now entering a generational opportunity with AI. We’re going to see a lot more companies and entirely new business models emerge. Understanding these changes is critical.

    Speaker 2:
    Some might ask: as a lawyer, what does the AI Fund have to do with me?

    I think there are several reasons:

    • Legal Opportunities: As you select and attract startups and corporate clients, you’ll need to discern which companies have strong growth potential in the age of AI.

    • Client Services: As you serve corporate clients embedding AI into their products or operations, they’ll need legal counsel that understands AI and its implications.

    • Education and Networking: Alumni Ventures hosts events for education and networking, which can benefit you and your firm.

    • Market Knowledge: The AI Fund provides access to information that helps you understand AI technologies and market changes, enabling better evaluation of companies and legal situations.

    We also run syndications at Alumni Ventures that investors in the AI Fund can access.

    Speaker 2:
    Regarding the AI Fund specifically:

    • This is our fourth AI Fund and the last one we’re raising this year.

    • It will invest in 15–20 deals over the next year.

    • The minimum investment is $25K.

    Through our syndication offerings, investors can view all the due diligence materials we use when evaluating deals. It’s a great way to gain insights into AI and other technologies.

    Now, I’d like to turn it over to my colleague, Chris Browne.

    Speaker 3:
    Hi, my name is Chris Browne. I’m in-house counsel at Alumni Ventures. Earlier in my career, I served as a judicial clerk, a professor of law, in-house counsel to a major registered investment advisor, and senior counsel at a litigation boutique focusing on disputes related to alternative investments. I’ve also served as an arbitrator with FINRA.

    My career has spanned a wide variety of practice areas, but they all now share the need to prepare for the disruptive effects of AI.

    Now, even though you’ve already heard from some of them, I’ll give the other panelists the opportunity to introduce themselves more formally, starting with Ed.

    Speaker 2:
    Thank you. My name is Ed Tsai, and I’m the Managing Partner at the AI Fund. I started my career at DCM, a global early-stage venture firm with $4 billion under management, and also worked in cybersecurity startups, including one that IPO’d in 2020.

    I’ve invested in several successful startups in the past, including Cruise, Palantir, Mission Labs, and Life360.

    I’ll turn it over to Sophia.

    Speaker 1:
    Hi everyone. My name is Sophia. I’m a Senior Principal on our AI team. Prior to becoming an investor, I worked in startups and banking, collaborating with founders and CXOs in cloud computing, SaaS, natural resources, and consumer sectors.

    After graduating from the Yale School of Management, I immersed myself in the world of Web3, and I’ve been focused on AI and Web3 investments on our team. I’m especially interested in how AI can optimize both work and life.

    Speaker 3:
    And Andrew Poling is with us as well.

    Speaker 4:
    Thanks, Chris. Hi, I’m Andrew Poling, a Senior Counsel at Wilson Sonsini. A lifetime ago, I was a software developer before AI became a big thing, so I have a tech background.

    Now, I’m a lawyer doing tech transactions—helping clients primarily in the software sector with IP-centric deals like software licenses, mergers and acquisitions, advising on open source issues, and providing guidance on AI policies and procedures.

    Speaker 5:
    I guess that leaves me. Hi, my name is Wynter Deagle. I’m a Partner at Sheppard Mullin in the Intellectual Property group. My expertise is in privacy and cybersecurity.

    For the last 15 years, my practice has been a hybrid: half the time, I help companies comply with what has become a complex web of global privacy and cybersecurity regulations while still driving business value; the other half, I litigate—defending companies in court or before investigators when they face investigative demands or lawsuits related to privacy or cyber issues.

    I’m a very experienced trial lawyer, having tried more than 30 cases to verdict. And, a long time ago, Chris and I were law school classmates.

    Speaker 3:
    With all that as a prelude, let’s get to the reason we’re here today.

    Beginning with the release of ChatGPT, the advent of AI has been called a generational opportunity. From news stories to graphic design to computer programs, AI has reduced the time and money required to produce work to a fraction of what it was just five years ago.

    But somehow, we’ve lost sight of a pressing question: how does the adoption of AI affect lawyers?

    While it may be hard to believe, given our popularity, there hasn’t been the same level of public interest in how AI will impact legal practice. I think that’s a missed opportunity.

    No matter what the future productivity landscape looks like, the need for IP protections, regulation, and dispute resolution will still exist. While details will change, the fundamentals of legal practice will likely survive AI.

    In fact, there’s reason to expect AI will bring growth to several practice areas. Today, we’re here to discuss the generational opportunities AI offers attorneys.

    To begin, I’d like to start with what’s probably best described as the soul of AI: intellectual property.

    With respect to IP ownership and protection—covering copyright, patent, trademark, and so on—where does AI stand? Panelists, are works of generative AI protected?

    Speaker 4:
    I’ll take the first stab at that.

    Generally, no. AI itself can’t be an owner, author, or inventor—it can’t be an author under copyright law or an inventor under patent law.

    The question is whether the users of these AI tools can be owners and have rights to the output of generative AI tools.

    The answer is yes.

    We recently saw a case involving a comic book called Zarya of the Dawn. The author of that work applied for copyright registration. The U.S. Copyright Office determined that they could claim authorship in the text and in the selection, coordination, and arrangement of the written and visual elements—essentially the expressive materials.

    However, they could not claim ownership of the actual generated images, as those were created directly by the AI tool.

    Speaker 4:
    The breadth and strength of protections on AI-generated works will be the main question, at least in the short term. How much protection does the user actually get for the outputs?

    There are things we, as lawyers, can think about and advise our clients on. For example:

    • How detailed are the inputs into generative AI tools? This will influence the analysis and potential protections.

    • Are users iterating on the inputs and generation process to demonstrate some level of creative control and authorship over the AI tool’s output?

    • What are the inputs being used? If users are leveraging third-party copyrighted works, that could undermine their argument for ownership of even the outputs.

    Additionally, we should consider trade secret protection for AI outputs. Even if copyright protection is unavailable, outputs may still be protected through contractual agreements.

    Speaker 3:
    I think there’s a venture-backed startup emerging in this space called Fairly Trained. It’s designed to analyze the inputs that go into AI’s outputs and certify whether they were trained on public source materials or copyrighted, protected works. This could be very useful for addressing these issues.

    Speaker 2:
    I’ll jump in. Recently, OpenAI’s CTO was interviewed and asked what data OpenAI’s models were trained on—YouTube content, certain documents, etc.—and her response was somewhat uncertain, with caveats like “maybe, not sure.”

    This highlights a big issue, especially for regulated industries. For example, I was talking with a generative AI company in the music industry. As you know, music labels are very protective of their content and assets. Record labels were very concerned about using AI-generated assets that didn’t have a clear origin or properly cleared IP for training.

    Sensitivity to liability varies by user. Some enterprises are highly risk-averse and will only use content trained on licensed data.

    For instance, there’s another company called Bria AI, based in Israel. It’s a generative AI platform trained solely on licensed materials (e.g., content from Getty). Some companies are conservative and want to avoid lawsuits, so they stick to cleared, licensed content or work with startups that verify whether training data is properly licensed.

    Regarding the earlier question about whether generative AI products or assets are protected: consider avatars being used in ads. These avatars have unique personalities and are essentially managed like human talent. These AI-generated “talents” can be protected by copyright laws for their voice, likeness, and persona. That’s going to become an important area.

     

    Speaker 4:
    It’s also worth distinguishing between:

    • Inputs for training: the materials used to train the AI model, and

    • Prompts for generation: the inputs users provide to create new outputs.

    Both raise questions about rights. Does the tool provider have rights to the training materials and to user prompts?

    As Ed mentioned, the provenance of training materials is a big issue right now. Contractually, it’s somewhat easier for tool providers to address user prompts. End-user license agreements typically grant rights to use those prompts, and providers may also use them to retrain models or improve services.

    We’ll discuss this in more depth shortly.

     

    Speaker 1:
    Building on Andrew and Ed’s point about training data: one suggestion from research articles is that large language model (LLM) applications should perform adequate data sanitization. This would prevent sensitive and contentious user data—often scraped from open sources—from being indiscriminately used in training.

    There’s also a trend emerging: smaller language models trained on proprietary datasets for specific use cases or industries. These could help safeguard intellectual property and create protective moats for both users and service providers.

    Here’s a funny anecdote about how people are combating AI scraping:

    We know that diffusion models like Midjourney and Stable Diffusion are trained on large datasets of online images, many of which are copyrighted, private, or sensitive.

    Two tools developed at the University of Chicago—Glaze and Nightshade—are addressing this:

    • Glaze: This tool analyzes how AI models are trained on human artwork and applies minimal, imperceptible changes to the artwork. To humans, the painting looks unchanged, but to AI, a realist painting might register as abstract, confusing the model.

    • Nightshade: This is more of an offensive tool. It distorts the feature representations inside images used for training. For example, AI might see a shaded image of a cow on grass as a brown leather purse on grass.

    Some fan fiction writers are also publishing intentionally confusing content. This disrupts AI models that attempt to continue writing chapters based on earlier, infringed creative works.

    Speaker 3:
    There are many new ways to handle these capabilities—whether offensively or defensively.

    So far, we’ve focused on third-party input into AI’s output. But what about user-provided input? If I enter a prompt into ChatGPT, who owns that, and what can be done with it?

    Speaker 4:
    That’s a great question. Again, this depends heavily on the contractual terms between the user and the AI provider.

    I can’t comment specifically on OpenAI, Google, Stable Diffusion, Midjourney, or Anthropic since they’re clients of our firm. But generally:

    • Providers are working to ensure their end-user agreements give them specific rights to use user inputs (prompts).

    • Many agreements also allow providers to continue using prompts even after initial services are delivered, enabling ongoing model training and improvement.

    The rationale is that when a user returns later, they benefit from these improvements.

    However, rights holders (e.g., music industry copyright owners) are extremely sensitive about their materials being used to train large language models.

    We’re already seeing disputes and complaints from rights holders—across images, music, and other content—challenging the rights of AI providers to use their materials for model training.

    Speaker 3:
    You mentioned end-user license agreements earlier. In heavily regulated fields like finance, there’s often an arms race between established incumbents and regulators over what rights companies can reserve in their end-user license agreements versus what regulations put beyond the scope of contractual waiver.

    Wynter, as a data privacy specialist, what’s your view on what end-user license agreements can actually claim and what’s protected, especially under prominent data privacy regulations like the CPRA and GDPR?

    Speaker 5:
    It would help if I took myself off mute.

    To a certain extent, we’re moving more and more—both in the U.S. and internationally—toward a framework where the data subject owns their own data. There are now deletion rights inherent in data protection laws, particularly in Europe and California. These rights even apply to employee data.

    When reviewing license agreements, one question I always ask is: do you even own the data going into this system, and can you legally grant the rights requested?

    By granting those rights, could you:

    • (A) breach your existing contracts, or

    • (B) violate people’s privacy rights?

    For example, if you take consumer information and, through a license, allow it to be used in an AI function that retains data for training the model, you face a problem. If a user later requests deletion of their data, you can’t comply—you’ve already granted perpetual rights.

    This creates risk and exposure if you don’t critically assess:

    • Who owns the data from the outset

    • What rights you have over that data

    • Whether you’ve altered those rights contractually

    • The evolving privacy laws that could make future compliance more difficult

    Future-proofing is critical to avoid trouble down the road.

    Speaker 3:
    On future-proofing, are there any emerging industry standards or policies and procedures for addressing these data use risks that are starting to gain traction?

    Speaker 5:
    We’re beginning to see this across several industries. Previously, companies and individuals were less concerned about long-term data rights. They were more permissive about granting broad, permanent rights to use data.

    Now, we’re seeing:

    • Very clear contractual requirements over data use

    • More thoughtful and limited approaches to handling individual data

    • Greater emphasis on de-identification of datasets

    • Critical evaluation of what data is truly necessary versus what’s merely available

    The goal is to ensure that datasets can be retained and used long-term without being undermined by excessive amounts of personal information that could trigger future regulatory or compliance issues.

    Speaker 3:
    Interesting. We’ve covered a lot about outputs and inputs, but let’s shift focus to the user.

    AI is making inroads into many heavily regulated and high-end professions. This raises concerns about malpractice and other regulatory risks.

    So the first question is: how does the standard of care in areas like medicine, law, and technology intersect with the adoption of AI? Could practitioners open themselves to malpractice liability by integrating AI into their services?

    Speaker 5:
    I’ll start. As a professional, your duty of care does not change regardless of the tools you use. Using AI is a choice you’ve made, and you’re responsible for that choice.

    We’ve already seen examples—like the lawyer who used ChatGPT to generate legal cases that turned out to be hallucinations. A motion for sanctions was filed against him, and it was granted.

    Similarly, for doctors or other professionals with fiduciary obligations, if they use AI, they remain fully responsible for their decisions and actions.

    I don’t believe courts will find that the standard of care changes because AI is used. But clients and patients may perceive that it should change, which adds to the complexity.

    Speaker 4:
    I agree. These industries are already regulated and have established ethical obligations.

    AI introduces many promising possibilities for professionals—lawyers, doctors, financial advisors, and beyond.

    But there are real risks. For example, in Michigan, the government used an algorithm to detect fraudulent unemployment claims. It incorrectly flagged many legitimate claims, causing harm: people lost financial assistance, lost their homes, and some had to file for bankruptcy.

    This shows that while AI can enhance services, it can also cause serious harm if not implemented carefully.

    Regulations will evolve in response. Some states are already addressing these issues, and the EU is considering the AI Act. This legislation will:

    • Define high-risk AI functions

    • Prohibit certain AI use cases

    • Impose new obligations on AI providers and users

    These protections will focus heavily on professional industries where consumer harm is more likely.

    Speaker 2:
    I think this is a fascinating discussion.

    In the Bay Area, companies like Cruise and Waymo operate self-driving car services. There’s a debate about safety standards:

    • Should these vehicles be held to a zero-error standard, or

    • Should they only need to be safer than human drivers?

    I believe the public will expect them to be much safer than human drivers. If they were only as safe, we’d see frequent negative reports.

    For policymakers, it’s important not to set perfection as the standard. If perfection is required, AI innovation could be significantly slowed.

    Thoughtful policies and protections are necessary to encourage safe use of AI in areas like medicine, transportation, and education—without halting innovation entirely.

    It’s a tough balance. Yes, there might be accidents or severe injuries caused by AI. But if, over a million miles, autonomous cars save more lives than human drivers, it’s arguably a net positive.

    This kind of nuance is critical when deciding how malpractice and liability should be addressed in AI-driven industries.

    Speaker 3:
    As we leave this segment, I want to circle back to a term Wynter emphasized earlier: choice.

    To sound a cautionary note, if you’re using AI to drive a car, you’ve clearly made a choice to expose yourself to whatever liability follows. But with more generalized applications of AI—like ChatGPT—it’s surprisingly easy to get them to say things that could reasonably be construed as professional advice.

    For example, I’ve asked AI bots, “How do I file my taxes?” Not because I actually want to risk trouble with the IRS, but simply to see what they would say. I’ve been surprised by how often they give very high-level and sometimes wrong instructions in specialized areas.

    Anyone designing or offering AI tools should keep this in mind and impose constraints on what issues the tool will address and which it will not.

    As we move forward, Ed and Sophia, have you seen any noteworthy new AI companies emerging in the professional services industry?

    Speaker 2:
    Yes, we just invested in a company called BenchIQ. They help lawyers understand prior decisions and why they were made in certain ways.

    It’s very early for the company, and they have some secret sauce we can’t publicly discuss yet. But I’d say that for anyone wanting deeper knowledge of cases or legal precedents, BenchIQ is worth exploring.

    Several law firms are already starting pilot programs with them, and Wilson Sonsini, I believe, has also invested in the company.

     

    Speaker 1:
    Maybe I’ll speak more broadly about trends in AI companies making an impact in the service industry.

    At NVIDIA’s GTC conference this week, Jensen Huang closed his two-hour keynote with a lineup of humanoid robots on stage. He was even joined on stage by two of Disney’s Star Wars-inspired BDX droids.

    This made me feel very excited about the prospect of humanoid robots enhanced through artificial intelligence.

    Robots are already being used in manufacturing, elder care, and food services. The next generation of humanoid robots will likely have:

    • Greater capability to understand natural language

    • The ability to learn physical movements simply by observing humans

    Integrating generative AI will empower these robots to make decisions and take actions based on a variety of inputs—language, visual data, demonstrations, and accumulated experience.

    This is incredibly exciting and will definitely impact the service industry. However, it also depends heavily on the advancement of multimodal AI, which combines multiple data types—text, images, voice—into more robust models, enabling richer and more creative content generation.

    Of course, we must also remain mindful of data security, data privacy, and everything else we discussed earlier in this webinar. This is a rapidly evolving space.

    Speaker 3:
    Continuing along the spectrum of potential risks, there’s also the issue of defamation liability.

    Recently, AI-generated research into public figures has often been wrong—and it seems to make errors disproportionately about controversial figures, which is the last place you want mistakes.

    There have already been lawsuits against OpenAI over hallucinated complaints about public figures that never actually existed.

    What’s the panel’s view on the potential for defamation liability when AI goes rogue and starts inventing facts or controversies about public—or worse—private figures?

     

    Speaker 5:
    One of the first lawsuits we’ve seen is pending against OpenAI in Georgia.

    In that case, an AI hallucination falsely claimed that a conservative radio host had embezzled money from a gun rights organization. That false information was republished and reshared multiple times.

    The lawsuit is against OpenAI, but there’s also potential liability for anyone who republished the defamatory statement.

    Importantly, you don’t have to create defamatory material to be held liable. You just have to republish it without investigating its truthfulness.

    Some defendants have argued that:

    • It should have been obvious the statement was false, or

    • The AI engine warned users that its results might be inaccurate and needed verification

    So far, these arguments haven’t been successful. Most defamation cases are moving forward past the motion-to-dismiss stage.

    If you can’t get out early, you’re more likely to face repeated lawsuits.

    When it comes to defamation, it really comes back to a “trust but verify” approach. If the only source for information is a chatbot, you should double-check it before sharing.

     

    Speaker 3:
    What’s your view on how the actual malice standard—which applies to public figures—might intersect with AI?

    Would it be harder to prove actual malice from a robot than from a human?

     

    Speaker 5:
    That’s an interesting question, and courts haven’t fully addressed it yet.

    If you think about it, an AI engine doesn’t have thoughts or intentions. Under a very strict interpretation of actual malice for public figure cases, you likely couldn’t successfully argue that the AI itself acted with malice.

    However, that doesn’t mean the developer is immune.

    We often think of AI as if it’s a real person, but it’s just a software program. The true defendant in a defamation case would be the developer, not the chatbot itself.

    So the question becomes: did the developer act with actual malice or otherwise fail to act appropriately, depending on the plaintiff and circumstances?

    Speaker 3:
    That’s a thoughtful look at how an old doctrine might be applied to a new set of circumstances.

    It also makes me wonder how proximate causation will be treated in software design cases.

    It feels like a long leap to hold a company liable for programming a ChatGPT application that then, indirectly, results in a widespread defamatory news story about someone—just because the language model extrapolated patterns and hallucinated false information.

    That seems much more attenuated than the classic Palsgraf case, where a passenger being helped onto a train drops a package of fireworks that explodes.

    These are all questions that smarter legal minds than mine will need to resolve as AI law continues to evolve.

    Speaker 3:
    To conclude this topic, what happens when a user intentionally tries to get an AI to produce false information?

    We might all remember Microsoft’s halted attempt at a chat AI years ago, “Tay Tweets,” which the internet quickly manipulated into generating unsavory statements.

    This raises interesting questions about how responsibility will be apportioned between the programmer and the user when an AI program outputs harmful or misleading information.

    Moving to the next topic: AI is increasingly well-represented in manufacturing—not just in the world of data, but in the world of physical products.

    That introduces potential products liability claims. If AI is involved in automating manufacturing and a product defect occurs, how will strict product liability doctrines interact with how AI was designed and integrated into the manufacturing process?

    Andrew, what are your thoughts?

     

    Speaker 4:
    The short answer is that this is yet to be determined.

    Ed raised an interesting point earlier about whether expectations for non-generative AI systems will be that they’re:

    • Completely error-free, or

    • Simply safer than manual, human-controlled processes

    I’m not sure yet, and it’s a significant philosophical question.

    Part of the answer will depend on how consumer-facing these AI-driven systems are, as that will highlight potential risks.

    So far, most proposed laws or bills under consideration focus on processes—what safeguards are in place when using AI?

    • Is there human review or intervention for quality control?

    • Are there mechanisms to ensure these systems work properly?

    The absence of such safeguards will likely increase liability exposure.

    Additionally, regulations such as the EU AI Act are beginning to classify AI use cases by risk levels:

    • High-risk AI use cases

    • Low- or no-risk AI use cases

    These classifications will help determine both the liability standards and the levels of scrutiny applied.

     

    Speaker 5:
    I agree with Andrew. This also highlights an interesting point for Andrew’s practice, which often focuses on software and service agreements:

    Is AI treated as a product or as a service?

    • If it’s a product, strict liability applies—particularly for mass-produced products

    • If it’s a service, a negligence standard would likely apply instead

    When planning a deployment or drafting agreements, you’ll want to think carefully about:

    • Structuring it as a service rather than a product, if possible

    • Whether its functionality effectively resembles a product, which could still invite strict liability

    Some courts may lean toward a negligence standard if AI is essentially stepping into the role of a human being.

    However, if AI functions as a standalone product, we may see courts applying strict products liability.

    This will become a key area of litigation and negotiation. Whether you’re working with Andrew on contracts or considering AI as an investment vehicle, you’ll need to carefully define whether it’s a product or a service.

     

    Speaker 3:
    That’s a fundamental dilemma we’ll need to resolve before tackling many of the other legal issues we’ve discussed today.

    Speaking of issues raised in this webinar, so far we’ve only heard my questions—and I’m sure everyone’s ready for new voices.

    Let’s turn to audience questions.

    The panelists have already addressed many topics, but here’s one we haven’t covered yet:

    What are some of the biggest areas where major law firms are seeking efficiency gains through the adoption of AI?

     

    Speaker 4:
    I can try to answer that—though I’m not sure how much I’m authorized to share.

    At a high level, law firms want to stay competitive by automating as much as possible, especially tasks that are repetitive or less interesting for human attorneys.

    We expect large law firms to focus AI adoption on:

    • Drafting NDAs

    • Creating standardized, relatively simple agreements

    • Other high-volume, low-complexity tasks

    These are the kinds of areas that cost firms and clients significant time and money, but don’t usually require bespoke, high-level legal reasoning.

     

    Speaker 5:
    You can also see this trend by looking at the products vendors are building for law firms.

    • Many tools are focused on document review

    • Contracts management and contracts review

    • Enhancing legal research efficiency

    Other products target error detection and correction in legal documents.

    All these tasks could, in theory, be performed much more efficiently if supported by AI systems instead of requiring human-only workflows.

    Speaker 2:
    Yeah, I’d like to jump in. I think there are two things to consider. One is the user—whether it’s the law firm or the corporate client. Different users want different things for different reasons.

    For example, I spoke with a company that initially tried selling to law firms but had a hard time, then switched to selling to corporate clients. What they were offering was automated drafting for simple documents.

    In practice, after drafting 200 documents and making small changes, you might still want a lawyer to review them—but now that review might take 15 minutes instead of a full hour. So, who the buyer is matters a lot.

    The second consideration is where the intelligence or benefit of AI comes from.

    I think AI is particularly strong in two areas:

    1. Processing and creating large amounts of data:
      As Winter mentioned, AI is useful for legal research, document review, and e-discovery. Instead of having 30 people sitting in a room overseen by associates, manually reading through documents, you could have AI do the first pass. It can quickly search through source code, Slack messages, chats, and texts, flagging the most relevant materials. You’ll likely still need human review, but AI significantly reduces the manual workload.

    2. General, simplified drafting:
      Over time, a lot of basic drafting will increasingly be assisted by AI, speeding up repetitive work and freeing lawyers to focus on higher-level tasks.

     

    Speaker 3:
    I know we’re running short on time, so I’ll wrap things up. I think we’ve addressed most of the audience questions throughout our discussion.

    Ed and Sophia, do you want to take us home?

     

    Speaker 2:
    Sure. This has been a great discussion from all the panelists—I’ve learned a lot as well.

    We’re always hosting interesting webinars at Alumni Ventures, so I hope you’ll stay tuned for future sessions.

    As a reminder, AI Fund 4 is closing at the end of March. If you have any questions about the fund, please reach out. Our colleagues will send out two links—one to our fund materials and data room, and another for booking a call with us.

    You can also email [email protected].

    This will be the last AI fund we’re offering this year, and with so much happening in the AI space, this is a great way to stay connected to the innovations unfolding in 2024.

     

    Speaker 3:
    I’d also like to thank our panelists for their time and insights today.

    If anyone has final remarks, the floor is open.

    It seems like we’ve covered everything for now, so thank you to everyone who joined us. I hope you enjoyed today’s discussion, and given the rapid pace of AI’s evolution, I’m sure we’ll have many more occasions to explore these issues together soon.

    Great—thank you, everyone!

     

About your presenters

Edward Tsai

Managing Partner, AI Fund

Edward has 15+ years of investment experience in the U.S. and China, including a successful track record with investments such as Cruise Automation (acq. by GM), Life360 (IPO), Palantir (IPO), and Brave Software. In addition, Edward has served on the limited partner advisory committees at Cendana Capital and Ten Eleven Ventures, and he has deep operating experience at tech and cybersecurity companies. Most recently, he was Director of Investments at enterprise security company Qianxin, where he led $700 million in fundraising, ran multiple M&A deals, and managed a large investment portfolio. As Assistant GM for Qianxin, he also incubated its cybersecurity spinout fund, Security Capital. At 360, he led International Investments and Strategic Development. He started his venture career as Vice President at DCM, a global early-stage VC firm managing $4 billion. He holds BS and MS degrees in Computer Science from UCLA and is a Kauffman Fellow (Class of ’15).

Sophia Zhao

Senior Principal, AI Fund

Sophia brings a wealth of experience in capital advisory, corporate development, and operational optimization, establishing impactful collaborations with CXOs and Founders. With a diverse industry exposure encompassing cloud computing, mining and minerals, consumer goods, and Web3, Sophia has been at the forefront of transformative technologies. Since 2018, she has been immersed in the crypto universe, working at Galaxy Digital, Huobi US, and Crypto.com. In these roles, Sophia engaged with startups and institutional clients on capital raising and trading across the Americas, EU, and Asia regions.

Actively fostering innovation and mentorship, Sophia serves as a mentor and judge for organizations such as Yale’s Tsai Center for Innovative Thinking (Tsai CITY), the Berkeley Blockchain Xcelerator, Techstars, and Layer 1 protocol ecosystems including Ethereum, Algorand, and Solana. She maintains close ties with the blockchain communities at Stanford and Yale.

Driven by a passion for shaping the future through frontier technologies, Sophia is currently supporting AI data and applications deals within her team. She holds a BBA from Simon Fraser University, an MBA from the University of British Columbia, and an MAM from the Yale School of Management.

Chris Browne

Deputy General Counsel

Chris is an in-house attorney for Alumni Ventures, a national venture capital firm focusing on AI. He has been a judicial clerk at trial and appellate levels, an adjunct professor of law, in-house counsel to a prominent national registered investment advisor, senior counsel to a well-known litigation boutique, an arbitrator for FINRA, and a solo practitioner providing advice and representation to hedge funds and other financial institutions focused on private investments.

Andrew Poling

Senior Counsel, Wilson Sonsini Goodrich & Rosati

Andrew Poling is senior counsel in the Boston office of Wilson Sonsini Goodrich & Rosati, where he is a member of the technology transactions practice. He specializes in strategic commercial transactions and mergers and acquisitions for technology companies. His clients range from multinational enterprises to cutting edge start-ups in various industries, including cloud computing, electronic gaming and entertainment, fintech, and other software-centric industries. Andrew’s practice covers a range of activities associated with acquiring and commercializing technology and intellectual property. These activities include drafting and negotiating a variety of intellectual property-focused and complex transactions, such as software license agreements, cloud computing agreements, services agreements, reseller agreements, supply agreements, and agreements for other types of commercial and strategic transactions, such as mergers and acquisitions. Prior to joining the firm, Andrew practiced in the technology transactions group at an intellectual property boutique based in Boston. Prior to law school, Andrew was a software developer and database architect developing e-learning websites, programs, and games for training pharmaceutical, biotech, and medical device sales teams.

Wynter Deagle

Partner, Sheppard Mullin

Wynter Deagle is a partner in Sheppard Mullin’s Privacy and Cybersecurity practice group. She is an experienced trial attorney whose practice focuses on defending individual and class actions relating to privacy, consumer protection, cybersecurity, and data collection, use and storage practices. Outside of the courtroom, Wynter designs global privacy and cybersecurity compliance programs that satisfy legal obligations while driving business value.




AV Headquarters
670 N. Commercial Street
Suite 403
Manchester, NH 03101
General inquiries: [email protected]

Press inquiries: [email protected]

603-518-8112




Neither Alumni Ventures nor any of its funds is sponsored by, affiliated with, or endorsed by any school. Venture capital investing involves substantial risk, including risk of loss of all capital invested. Achievement of investment objectives cannot be guaranteed. Past performance does not guarantee future results. To see information on all AV fund investment performance, please see here. To see additional risk factors and considerations, please see here.

No content hosted on this website is an offer to sell, or a solicitation of an offer to purchase, any security. Such offers are made only pursuant to the formal offering documents for the funds concerned, which describe the risks, terms, and other important information that must be carefully considered before an investment is made. Alumni Ventures and its affiliates provide advice only to venture capital funds affiliated with AV. No information on this website may be relied upon as personalized advice to any recipient.

Please see Alumni Ventures’ Legal & Privacy Policy here and additional Investor Policies here.

Testimonial/Endorsement Policy: Testimonials and Endorsements were provided without compensation but each provider has a relationship with AV from which they benefit. Management of portfolio companies have received, and may in the future receive, investments from AV, which constitutes a conflict of interest. All views expressed are the speaker’s own. The providers of the testimonials/endorsements were not selected on objective or random criteria, but rather were selected based on AV’s understanding of its relationship with the providers of the testimonials/endorsements. The testimonials and endorsements do not represent the experience of all AV fund investors or all companies in which AV funds invest.

Alumni Ventures is America’s largest VC firm for individual investors based on the combination of total capital raised, number of investments, and number of investors of leading VC firms as reported by Pitchbook and other publicly available information reviewed by AV.

Video Policy: By consuming this content I acknowledge that I may be considering an investment with AV funds for my own or my client’s account. I agree that information contained herein may not be relied upon or used for any other purpose.

Co-investors: Co-investors are shown for illustrative purposes only, do not reflect the universe of all organizations with which AV has co-invested, and do not necessarily represent future co-investors. The identity of a co-investor does not necessarily indicate investment quality or performance.

© 2025, Alumni Ventures. All rights reserved.
