The global surge in artificial intelligence (AI) has accelerated investment and M&A targeting the technology and talent of Japanese AI startups.
While Japan lacks a comprehensive regulation specifically for AI, relying instead on "soft law" frameworks such as the AI Guidelines for Business Ver1.1 [1], this does not imply a lack of legal risk. Japanese AI startups remain fully subject to existing regulations, including the Copyright Act and the Act on the Protection of Personal Information (APPI). Therefore, foreign investors targeting Japanese AI startups need to navigate a unique local legal landscape.
Below are key checkpoints on the critical legal issues in investment and M&A targeting Japanese AI startups, and in the related due diligence (DD).
[1] https://www.meti.go.jp/shingikai/mono_info_service/ai_shakai_jisso/pdf/20250328_1.pdf (in Japanese); https://www.meti.go.jp/shingikai/mono_info_service/ai_shakai_jisso/pdf/20240419_14.pdf (English provisional translation, published together with the original text).
1. Intellectual Property
Japan’s Copyright Act (specifically Article 30-4 (use without the purpose of enjoying the thoughts or sentiments expressed in a work) and Article 47-5 (minor use)) is often viewed as "developer-friendly" with regard to machine learning. However, the statutory requirements are nuanced. Recent litigation, such as the August 2025 lawsuits filed by major publishers (Asahi, Nikkei, and Yomiuri) against Perplexity, highlights the growing risk of unauthorized data use.
- Training Data Legality: The DD must scrutinize whether the target’s data processing involving copyrighted works complies with the complex requirements of Article 30-4 or 47-5 of the Copyright Act.
- Secondary Infringement Risks: Even if an end-user is the one who generates infringing content, the AI developer/provider may still bear liability for infringement. Investors should assess whether the target has implemented sufficient technical and contractual safeguards to mitigate these risks.
- Ownership of Trained Models: When the target develops AI for specific clients, IP ownership of the trained models and parameters is often heavily negotiated. The DD should review the relevant agreements to confirm whether the target retains the right to use those models for improving its own technology or for serving other clients.
2. Data Privacy & Cybersecurity
- Legal Basis for Handling PII: If the target receives any PII (personally identifiable information) from clients to train/provide its AI, investors must validate the legal basis for such transfer. For example, if the target acts merely as an "outsourcee" (handling PII on behalf of a client), using that PII to improve its own proprietary models may exceed the scope of outsourcing and violate the APPI.
- AI-Specific Security: Beyond standard cybersecurity, AI faces unique threats like "data poisoning" (malicious data injection) and "model inversion" (extracting training data). Leading DD practices now involve confirming whether the target employs "AI Red Teams" to stress-test for these specific vulnerabilities.
3. Open Source AI
"Open Source" in AI introduces complexities beyond traditional software code, involving model weights and training datasets. Licenses like the Stability AI Community License2 (imposing revenue caps of US $1,000,000) or Responsible AI Licenses3 (restricting usage scope such as harm and discrimination) create operational constraints.
- Policy Audit: Investors should confirm that the target maintains a robust "OSS Policy" covering model weights and datasets as well as source code, since a license violation could jeopardize the proprietary nature of the core technology.
[2] https://stability.ai/community-license-agreement
[3] https://www.licenses.ai/ai-licenses
4. AI Governance
Beyond security, AI developers/providers also need to manage AI-specific risks such as "hallucinations" (false outputs) and bias.
- Governance Framework: The DD should evaluate whether the target has established an appropriate internal governance system—organizational structures and policies—to identify and mitigate these risks, consistent with the government’s AI Guidelines for Business Ver1.1.
5. Foreign Investment Regulations (FEFTA)
Under the Foreign Exchange and Foreign Trade Act (FEFTA), foreign investors generally must submit a prior notification to the Japanese government and observe a waiting period of, in principle, 30 days before acquiring shares in companies operating in "Designated Business Sectors."
- Prior Notification: Since AI businesses often fall under categories like "Information Processing Services," "Custom Software Development," "Embedded Software," or "Package Software," it is critical to ascertain whether the target’s specific activities trigger this mandatory prior notification requirement.
Given the latent risks within Japan’s seemingly flexible regulatory environment, investors should conduct robust DD and consider incorporating AI-specific representations and warranties into definitive agreements.
Please note that the items above are not exhaustive. The legal landscape remains fluid and will inevitably evolve alongside AI technology.


