📅 Sessions from 2026-02-16
Her First Algorithm, India's Next Breakthrough
The session at the India AI Impact Summit 2026 showcased India's thriving youth-driven AI innovation ecosystem, highlighting the transformative impact of the Atal Innovation Mission's systemic support through school-based maker spaces and mentorship. Three young female innovators—Akila, Shubhangi, and Shishi—presented AI-powered solutions addressing critical gaps: direct artisan access to global markets, grain storage optimization, and digital health support for chronic fatigue syndrome. The segment emphasized the perseverance, creativity, and social empathy fueling their journeys and stressed the critical role of government initiatives, such as the Atal Tinkering Labs, in democratizing access to technology and enabling promising solutions to scale nationwide. The event further underlined the need for inclusivity, mentorship, and infrastructure in nurturing the next generation of AI entrepreneurs, setting an inspiring context for India's vision of a developed 'Viksit Bharat' by 2047.
- Three young women innovators presented AI-driven solutions emerging from Atal Tinkering Labs: Akila's AI e-commerce platform for artisans, Shubhangi's Intelligent Grain Storage System (IGSS), and Shishi's multilingual digital health tool for chronic fatigue syndrome.
- Akila's platform aims to connect Indian artisans across 76,000 towns, 2.65 lakh panchayats, and 6.65 lakh villages to global markets.
- Shubhangi's IGSS utilizes AI for real-time grain warehouse monitoring, shifting from reactive to proactive storage management.
- Shishi's Innervision app supports chronic illness patients, with plans for multilingual and global expansion.
- All three innovators credited mentorship, institutional support from teachers, and Atal Tinkering Labs for their progress.
- The Atal Innovation Mission, launched in 2016, now supports over 10,000 Atal Tinkering Labs nationwide, representing deep government investment in grassroots innovation.
- India still faces major gaps in special education: only 1% of an estimated 3.5 crore children with special needs are identified, and there are just 1.2 lakh special educators for those 3.5 crore children.
- The event's showcase of both current students and past beneficiaries (now entrepreneurs) highlights the generational impact and scalability of the Atal Innovation Mission.
- The session reinforces India's target of becoming a developed nation ('Viksit Bharat') by 2047 through large-scale, inclusive AI-driven innovation.
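Shubhangi's IGSS is described only at a high level above; as an illustration of what shifting from reactive to proactive storage management could mean in code, the sketch below flags a silo whose temperature trend would cross a spoilage threshold before it is actually breached. All names, thresholds, and readings are hypothetical, not details from the session.

```python
# Hypothetical sketch of proactive (trend-based) silo monitoring, as opposed
# to reactive alerts that fire only after a threshold is already crossed.
from dataclasses import dataclass

@dataclass
class Reading:
    temp_c: float        # grain temperature in Celsius
    humidity_pct: float  # relative humidity inside the silo

SPOILAGE_TEMP_C = 30.0   # illustrative threshold, not from the session

def proactive_alert(history: list[Reading], horizon: int = 3) -> bool:
    """Alert if the recent temperature trend would cross the spoilage
    threshold within `horizon` future readings (linear extrapolation)."""
    if len(history) < 2:
        return False
    # Average per-step temperature change over the recorded history.
    deltas = [b.temp_c - a.temp_c for a, b in zip(history, history[1:])]
    slope = sum(deltas) / len(deltas)
    projected = history[-1].temp_c + slope * horizon
    return projected >= SPOILAGE_TEMP_C

readings = [Reading(26.0, 60), Reading(27.2, 62), Reading(28.1, 63)]
print(proactive_alert(readings))  # True (projected ~31.25 C >= 30 C)
```

A reactive system would stay silent here, since 28.1 °C is still under the threshold; the proactive version alerts on the rising trend, which is the distinction the bullet draws.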
Building a Sovereign AI-Enabled Public Health Surveillance Grid | India AI Impact Summit 2026
The opening session at the India AI Impact Summit 2026, organized by RailTel Corporation of India Limited, established a strong focus on harnessing artificial intelligence (AI) for measurable public good, with healthcare as a primary domain. Senior leaders from government, industry, and academia highlighted India’s potential to leapfrog typical digital health milestones, drawing parallels to UPI’s payments revolution. The session set the stage for two central panel discussions: collaboration for public health systems strengthening and ensuring the inclusion of underserved populations in AI healthcare rollouts. Panelists, including representatives from Qure.ai, National Health Authority, Superviti AI, and Africa CDC, shared early success stories (such as Qure.ai’s AI-driven disease detection in underserved geographies), emphasizing evidence-based outcomes, scalable deployment, and public trust. A key theme was India's leadership in delivering world-first population-scale AI solutions, not only benefiting its own citizens but serving as a blueprint for the Global South. Discussions underscored the necessity for secure digital infrastructures, robust data governance, affordability, accessibility, and ethical use of AI to ensure that innovation narrows—not widens—healthcare gaps.
- RailTel Corporation of India Limited hosted the session, emphasizing its transition from digital infrastructure provider to comprehensive digital health solutions leader.
- The summit theme centers on AI delivering real, measurable impact in public institutions, especially healthcare.
- Panel discussions focus on (1) collaboration to strengthen public health systems and (2) inclusion—delivering AI benefits across all geographies and economic strata.
- Highlight on Make in India technologies and collaborations with startups like Qure.ai, which operates in over 100 countries and was the first globally to publish AI disease diagnostics research in the Lancet.
- Qure.ai’s AI has screened over 200,000 people in Goa, cutting diagnosis time for diseases such as cancer by 50%, and has received WHO approval for TB detection without human intervention.
- Indian deployments see AI reaching the neediest first, reversing the usual technology trickle-down; AI is deployed in areas with few or no radiologists (e.g., Malawi, Timor-Leste).
- Economic benefits highlighted: early detection reduces system costs and improves outcomes (e.g., treating stage 1 cancer is far less burdensome than late-stage treatment).
- Leadership from Indian and African digital health sectors present, supporting public digital infrastructure initiatives (ABDM, UHI) as possible transformative global models.
- Session chair emphasized the need for secure infrastructure, responsible data governance, and public trust alongside technological advancement.
- Panel included voices from government (NHA), academia (IIT Delhi), industry leaders (Qure.ai, Superviti AI), and Africa CDC, setting a collaborative, multi-sectoral tone for the summit.
AI, Policy, and the Rule of Law: A Policymakers’ Dialogue
This session at the India AI Impact Summit 2026 focused on the approaches of India and Germany toward AI governance, highlighting the importance of inclusivity, constitutional values, and stakeholder participation in regulatory frameworks. Stephan Stampy from CAT outlined the rapid pace of AI advancements and the need for balanced rule-making, referencing potential impacts such as AGI and the global necessity of ethical considerations. Indian parliamentary leaders emphasized a constitution-based, 'soft-touch' regulatory approach prioritizing equality, transparency, accountability, and especially inclusivity—mindful of the country’s digital divide. They advocated for dynamic, flexible governance to promote innovation without stifling growth. German representatives underscored the value of mutual learning and adaptive regulatory cycles, suggesting regulatory learning and participatory mechanisms. Industry leaders added that India’s techno-legal, incremental approach, grounded in operationalizing existing laws, can serve as an actionable way to embed responsible AI governance as a driver for long-term value and resilience, rather than mere compliance. Cross-jurisdictional collaboration between India, Germany, and the EU was positioned as key to anticipating risks and leveraging diverse experiences for robust regulatory evolution.
- The session opened with representatives from India and Germany emphasizing the global importance and local nuances of AI governance.
- India is adopting a 'soft-touch,' constitution-based approach to AI regulation, prioritizing inclusivity, transparency, accountability, and equality.
- India’s regulatory philosophy is to avoid over-regulation that could impede innovation, particularly in light of existing digital divides.
- Germany and EU speakers highlighted ongoing regulatory learning, suggesting policy cycles that involve early stakeholder input and policy feedback loops.
- Practical mechanisms such as policy pilots and participatory tools were stressed as ways to implement inclusive governance.
- Industry voices underscored a techno-legal approach: reviewing and operationalizing existing laws (e.g., privacy, consumer protection) before introducing new regulations on AI.
- There is strong emphasis on cross-border and cross-sectoral learning—for example, India learning from the EU AI Act’s risk-based approach, while the EU learns inclusivity from global South experiences.
- Stakeholder engagement, both upstream and downstream, is considered vital for responsive and effective AI governance.
- Regulation should not be seen as mere compliance but as a forward-looking tool to ensure AI products are safe, unbiased, and bring long-term value.
Prime Minister Narendra Modi inaugurates Expo at the AI Impact Summit, 2026
The India AI Impact Summit 2026 emerged as a powerful demonstration of India's commitment to inclusive, responsible, and scalable AI growth. With the participation of 13 country pavilions and 600 startups, the event showcased India's burgeoning AI ecosystem and its global connectivity. The Prime Minister's extensive tour of the expo highlighted India's focus on AI innovations across key sectors—healthcare, governance, education, and agriculture—emphasizing the power of AI in reaching the grassroots and addressing local challenges, especially in rural and remote areas. The summit spotlighted indigenous success stories like 'Sarvam', a multilingual AI platform tailored for Indian languages, and India’s drive towards sovereign AI models, integrating both hardware and software solutions at globally competitive costs. India's approach, contrasting Western for-profit and Chinese surveillance-centric models, is rooted in trust, accountability, safety, and democratization of AI. The government’s decade-long encouragement of startups—through policy support, funding, and ecosystem building—has led to a thriving, impact-oriented innovation environment, shifting societal mindsets towards entrepreneurship and technological adoption. Policies discussed at the summit, such as introducing AI in school curricula, piloting AI-powered public distribution in Gujarat, and launching the 'Bharat Vistaar' initiative for agricultural support via AI chatbots, underline India's intent to blend technology with social impact and set a template for the Global South.
- 13 international country pavilions and 600 startups participated, showcasing global and domestic AI innovation.
- India’s PM visited various AI stalls, with particular focus on homegrown solutions like 'Sarvam' (a multilingual AI model supporting Indian languages) and 'BharatGen' (India's sovereign AI model initiative).
- India aims to deliver integrated hardware and software AI solutions, targeting data center costs at almost half the global average and GPU manufacturing at 30-40% of global costs.
- AI use-cases highlighted include health diagnostics in rural areas, AI-driven token-based public distribution pilots in Gujarat, and education-policy reforms introducing AI teaching in early grades.
- 'Bharat Vistaar' initiative, powered by AI chatbots (notably 'Bharati'), is set to offer agriculture advice and support to farmers, with imminent rollout in Jaipur.
- India’s AI model emphasizes accessibility, affordability, safety, and trust—standing apart from Western profit-led and Chinese privacy-questioned models.
- Government support—via Startup India, Atal Tinkering Labs, NITI Aayog, and targeted funding—has fundamentally shifted societal views on entrepreneurship, fostering a strong startup and AI ecosystem.
- Summit dialogues affirmed that India’s AI journey is focused on tangible social impact, measuring success via outcomes for citizens, not just technological prowess.
- India positions itself as a Global South leader by developing scalable AI solutions tailored for local and emerging market needs.
AI by Her (All Sessions)
The session at the India AI Impact Summit 2026 explored the multifaceted challenges and opportunities of scaling AI across three major sectors: finance, healthcare, and agriculture. Panelists highlighted that real-world deployment of AI models faces significant hurdles, including unstructured or insufficient data, rapidly evolving regulatory landscapes, and the need for deep domain-specific talent—especially those combining technical and sector expertise. In finance, building trust, overcoming regulatory uncertainties, and accessing clean, AI-ready datasets are fundamental. In healthcare, the spectrum runs from lightly regulated personal wellness applications to high-barrier, government-driven enterprise solutions, each requiring different paths to meaningful scale and procurement. Agriculture’s potential is vast but constrained by trust deficits, fragile unit economics, weak data infrastructure, and barriers to accessing proprietary Indian datasets. Across sectors, government involvement—particularly through open data initiatives, policy direction, and procurement mandates—plays a pivotal role in bridging gaps and enabling startups to achieve both product-market fit and massive scale. The panel concluded that while India’s demographic and market size creates unique scaling opportunities, success hinges on overcoming regulatory, data, and trust challenges through collaboration, transparent policy, and open data sharing.
- AI startups face real-world scaling challenges: unstructured data, regulatory flux, and lack of sector-specific AI talent.
- In finance, core barriers are data quality, regulatory uncertainty (with rules changing monthly), and building customer trust amid concerns about AI model bias and cybercrime.
- Healthcare AI scaling is easier in personal wellness (less regulation), harder in enterprise and government solutions, where strict benchmarks, public procurement policies, and trust are paramount.
- India AI Mission guarantees procurement for winners of government competitions, and cost/impact benchmarks can override traditional price-based selection in health sector procurement.
- Agriculture’s scaling potential is enormous (120-150 million Indian farmers), but limited by trust deficits, thin unit economics, fragmented distribution, and lack of accessible, clean, contextualized Indian agricultural data.
- Open sourcing India's proprietary sectoral datasets (agriculture, health, etc.) is seen as a transformative step for AI innovation and global competitiveness.
- Government-backed open data initiatives like AgriStack (70 million farmer IDs issued) are beginning, but broader data availability and affordable collection are still needed.
- Across sectors, building end-customer trust in AI outputs is as critical as regulatory compliance and data quality.
- Distribution partnerships and deep domain expertise are essential for scalable, sustainable impact, especially in agriculture and fintech.
AI4All: Driving Inclusive Innovation and Opportunities
The session highlighted the official launch and strengthening of a global collaboration between India (represented by IIT Ropar) and Israel to advance the use of artificial intelligence (AI) in agriculture. With IIT Ropar initiating the Anam AI Foundation under the Ministry of Education as a national center of excellence for AI in agriculture, this partnership aims to leverage Israel's agri-tech expertise and India's agricultural scale. The collaboration is marked by the presence of academic, diplomatic, and policy leaders from both countries, emphasizing joint research, technology transfer, and the introduction of a Bachelor of Technology (B.Tech.) program in digital agriculture at IIT Ropar. This move aims to address key challenges such as food security, productivity, and farmer welfare through AI-powered solutions in weather forecasting, soil health, and precision farming, setting a precedent for responsible and globally cooperative AI-driven agricultural development.
- India and Israel announced a major collaborative initiative to push the global AI agriculture agenda.
- IIT Ropar, located in Punjab, is the nodal center of excellence for AI in agriculture in India via the Anam AI Foundation, under the Ministry of Education.
- IIT Ropar is the first IIT to launch a B.Tech. program in digital agriculture.
- A new center of excellence, "AI for Agriculture," has been established at IIT Ropar.
- The partnership stems from an Indian delegation's visit to Israel and ongoing dialogues, facilitated by the Embassy of Israel in India.
- Multiple verticals are targeted, including AI-driven weather forecasting, soil testing, health monitoring, and productivity optimization for farmers.
- The summit features leaders from leading Israeli agri-tech and policy organizations, underscoring bilateral knowledge exchange.
- This collaboration is positioned as a model for responsible and impactful cross-national AI development in agriculture.
AI Is Your New Teammate: How to Work Smarter, Build Faster, and Think Bigger
This session at the India AI Impact Summit 2026 offered deep insights into how artificial intelligence (AI) is fundamentally transforming professional and personal lives in India. The speaker highlighted a significant mindset shift: AI is no longer just a question-and-answer tool but is becoming a constant companion—used for research, friendship, productivity, and even personal relationships. The conversation covered generational differences in AI adoption, with Gen Z seeing AI as a collaborative ally rather than a threat, making them highly efficient. The role of cognitive endurance in success was emphasized, alongside India's leapfrogging approach to technological adoption. The speaker predicted that while India may not create most foundational AI infrastructure, it stands poised to become the world’s biggest beneficiary by innovating on top of AI platforms and integrating them deeply into day-to-day life. The country’s unique 'Indian tadka' to AI usage, rapid adoption, and application orientation were discussed as major factors paving the way for a new class of solo entrepreneurs and businesses leveraging AI for unprecedented scale and productivity.
- AI has transitioned from a mere answer tool to an integrated participant and 'companion' in both professional and personal spheres.
- Generational differences are pronounced: Gen Z adopts AI naturally and without fear, seeing it as a friend and enabler.
- Concrete use cases include AI-assisted meeting preparation, legal document analysis, email and personal communication, and as a 'thinking challenger' for business ideas.
- AI adoption is leading to higher work output, with teams expanding to tackle 10x more tasks, not shrinking.
- India is rapidly leapfrogging stages in technology adoption; AI learning and daily use are increasing faster in India than globally.
- Speaker forecasts the rise of 'solo unicorns'—single individuals running massive, automated businesses powered entirely by AI.
- AI is not dumbing down society but is exposing and amplifying the difference between those who think strategically and those who don’t.
- India likely to be the world's top 'second-order beneficiary'—gaining immense value not by inventing core AI tech, but by applying and innovating on top of these platforms.
- Cognitive endurance—the ability to sit with and solve tough problems—is identified as the common trait among world’s top performers.
- Unique Indian approaches to AI application and podcasting reflect a broader trend of customizing global innovations to local contexts.
AI for Energy: Digital Twins and India’s Energy Stack
The India AI Impact Summit 2026 session, co-hosted by the International Solar Alliance (ISA) and partners, spotlighted the transformative potential of artificial intelligence (AI) in optimizing energy systems, particularly in the context of India's rapidly expanding renewable energy infrastructure. The opening address by Mr. Karan Mangotra underscored the critical intersection of AI and solar energy, advocating for intelligent, inclusive, and citizen-centric energy transitions. A central highlight was the unveiling of the India Energy Stack (IEES), a digital public infrastructure designed to enable seamless interoperability across utilities, service providers, regulators, innovators, and consumers, inspired by the success stories of UPI, Aadhaar, and GST. Shweta Ravi Kumar’s presentation detailed IEES’s modular building blocks—covering identity, registries, protocols, credentials, and policy-as-code—which collectively aim to democratize energy access, foster innovation, and enhance livelihoods at scale. The session outlined a future where passive consumers become active “prosumers,” startups and local entrepreneurs drive new energy services, and AI agents personalize energy management for over a billion people. Partnerships spanning government ministries, global organizations, and knowledge partners were acknowledged as pivotal in shaping and deploying these initiatives. The session transitioned to a fireside chat on operationalizing this vision, emphasizing that these frameworks, built in India, are set to influence energy transitions globally.
- Over 400 GW of new solar capacity was added globally in a single year, with decentralized renewables accounting for 45% of the 1,000 GW added over the last two years.
- India’s cumulative solar deployment stands at 150 GW, but currently only 15% is decentralized.
- Introduction of the India Energy Stack (IEES): a digital public infrastructure for electricity, enabling secure, interoperable communication between all stakeholders.
- IEES is structured around five foundational building blocks: identity and addressing, registries and directories, interaction protocols, energy credentials, and policy-as-code.
- India plans to connect 300 million households via smart meters, with increased deployment of smart appliances, EVs, and distributed generation.
- IEES is modeled on successful national DPIs (UPI, Aadhaar, GST, ONDC), designed to support AI, enable seamless scaling, and unlock data for analytics and personalized services.
- AI is expected to be the intelligence layer atop the stack, supporting grid reliability, consumer empowerment, and new business models (e.g., prosumer settlements, local energy markets).
- IEES supports energy ‘agency’—enabling citizens, entrepreneurs, and self-help groups to actively participate and innovate in the energy market.
- The initiative is a collaborative effort involving the Ministry of Power, Ministry of New and Renewable Energy, Rural Electrification Corporation, FSR Global, ISA’s multi-donor trust fund, and knowledge partners.
- The Indian approach to digital public infrastructure is intended as a global model—'built in India for the world'.
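The session names IEES's five building blocks but gives no interfaces. As a minimal sketch of how two of them might fit together, the code below pairs a registry/directory of participants with a "policy-as-code" rule evaluated against it. The registry shape, credential names, and export cap are all invented for illustration; nothing here reflects the real IEES specification.

```python
# Hypothetical sketch of two IEES building blocks: a registry/directory of
# energy-market participants and a policy-as-code rule evaluated against it.

# Registry: each participant has an identity, a role, and energy credentials.
registry = {
    "prosumer-001": {"role": "prosumer", "credentials": ["rooftop-solar"]},
    "consumer-042": {"role": "consumer", "credentials": []},
}

EXPORT_CAP_KW = 5.0  # illustrative regulatory cap

def may_export(participant_id: str, requested_kw: float) -> bool:
    """Policy-as-code: a machine-readable rule instead of a prose regulation.
    Here: only credentialed prosumers may export, capped at EXPORT_CAP_KW."""
    p = registry.get(participant_id)
    if p is None or p["role"] != "prosumer":
        return False
    return "rooftop-solar" in p["credentials"] and requested_kw <= EXPORT_CAP_KW

print(may_export("prosumer-001", 4.0))  # True: credentialed, under cap
print(may_export("consumer-042", 1.0))  # False: not a prosumer
```

The point of the pattern is that a rule expressed as code can be checked automatically at transaction time across every utility and service provider on the stack, rather than interpreted case by case.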
Advancing AI Safety Across Languages, Cultures, and Contexts
The session at the India AI Impact Summit 2026 emphasized the urgent need for AI systems to advance beyond mere translation and address the intricacies of multilingual and multicultural safety evaluation. Speakers highlighted that AI models calibrated in one language, often English, can have materially different safety outcomes when deployed in other languages and cultural contexts. Real-world examples from Singapore's cross-country red teaming, Microsoft's investments in culturally grounded model evaluation, and academic research from India and elsewhere underscored the limitations of existing benchmarks and the potential exploits in low-resource languages. The panel collectively advocated for ecosystem-wide collaboration—spanning research, industry, and government—to develop rigorous, reusable, and locally sensitive evaluation methods. Initiatives like MLCommons' AILuminate benchmark, Microsoft's Project Gecko, and the formation of multicultural AI consortia were spotlighted as pioneering efforts. Ultimately, the session called for a holistic approach that can adapt safety standards to both evolving adversarial threats and diverse definitions of harm in different regions.
- Singapore led a cross-country red teaming effort involving nine Asia-Pacific countries to uncover regional variations in AI risk and harm.
- AI models exhibit different safety failure rates depending on language and cultural context, highlighting that safety measures in one language may not generalize.
- Fewer than 5% of world languages are meaningfully represented in AI training data, with English dominating at 42% of widely used datasets.
- Low-resource languages and non-Latin scripts are particularly susceptible to adversarial exploits due to insufficient testing and safeguards.
- Microsoft is advancing multilingual evaluation through support of MLCommons' AILuminate benchmark and local projects like Project Gecko in Africa and South Asia.
- IIT Madras (Sarai) is pioneering culturally grounded testing to surface locally relevant bias and stereotyping in AI models.
- Tokyo's GP center proposed a multicultural AI consortium to create shared evaluation infrastructure beyond translations, focusing on low-resource languages.
- Panelists identified that contextual and adversarial safety issues emerge uniquely in different settings, and that it is crucial to enable efficient local adaptations of AI safety standards.
- Singapore’s IMDA found that even direct dataset translations are inadequate for standardizing safety testing, spurring efforts to develop common taxonomies of harm.
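The finding that safety failure rates differ by language implies per-language aggregation of red-team outcomes. The sketch below shows that bookkeeping in its simplest form; the records and resulting rates are invented for illustration and are not Singapore's actual results.

```python
# Hypothetical red-team log: (language, attack_bypassed_safeguards).
from collections import defaultdict

records = [
    ("English", False), ("English", True), ("English", False), ("English", False),
    ("Tamil", True), ("Tamil", True), ("Tamil", False),
]

def failure_rates(records):
    """Fraction of red-team prompts per language that bypassed safeguards."""
    totals, fails = defaultdict(int), defaultdict(int)
    for lang, bypassed in records:
        totals[lang] += 1
        fails[lang] += bypassed  # bool counts as 0/1
    return {lang: fails[lang] / totals[lang] for lang in totals}

print(failure_rates(records))  # English: 0.25, Tamil: ~0.67
```

Even this toy comparison illustrates the session's core claim: a model that looks safe when probed in English can show a much higher bypass rate in a lower-resource language, so per-language measurement is a prerequisite for any shared taxonomy of harm.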
Sustainable Digital Infrastructure Accord: Advancing AI Infrastructure in Asia-Pacific
The session at the India AI Impact Summit 2026 spotlighted India and the broader Asia-Pacific (APAC) region's emergence as global leaders in digital infrastructure, focusing intensively on the dual imperatives of rapid AI-driven data center growth and sustainability. Major stakeholders—including policy makers, industry leaders, and think tanks—emphasized the urgent need for sustainable practices as data center investments and energy demands soar, particularly to address the risks posed by massive power consumption and water usage. A centerpiece announcement was the forthcoming launch of the Sustainable Digital Infrastructure Accord (SDIA), a collaborative regional framework to set voluntary, measurable sustainability targets across energy efficiency, clean energy adoption, water management, and circular economy. Inspired by the European data center pact but tailored for APAC’s diverse conditions and regulatory landscape, the SDIA aims to harmonize industry sustainability standards and foster ongoing dialogue between government, industry, and civil society. The session underlined the existential importance of sustainability for the digital sector, especially in India, which is projected to account for up to 20% of global data usage and is investing heavily in both data center expansion and green energy. The panel stressed that without coordinated action, the region risks regulatory fragmentation, but with frameworks like the SDIA, India and APAC can anchor digital growth in robust, planet-friendly foundations.
- A strong focus on India and Asia-Pacific as emerging global tech and AI leaders, driven by rapid digital infrastructure development.
- The Sustainable Digital Infrastructure Accord (SDIA) will be launched in March or April 2026, aiming to set APAC-wide baseline industry commitments to sustainability.
- SDIA is a voluntary, collaborative framework involving 10 major colocation data center providers (covering 250 APAC data centers, with 42 completed and 15 under construction in India), hyperscalers, and regional governments.
- SDIA targets will be set across four areas: energy efficiency (e.g., PUE targets), clean/carbon-free energy use (renewables, nuclear, direct procurement), water management (addressing local water stress), and circular economy practices (waste heat management, embodied carbon).
- Data centers in India are projected to triple or quadruple electricity consumption by 2030, making India the APAC region’s second-largest data center power consumer.
- India's data usage could account for 15-20% of world total due to population and accelerating digital economy.
- India aims for 500 GW of non-fossil capacity by 2030, with 61% of energy needs met by renewables.
- A single hyperscale data center may use up to 1.5 million liters of water per day for cooling—a critical issue in water-stressed regions.
- The SDIA will provide a formal structure for ongoing government-industry dialogue, aiming to inform national and regional policy, set sustainability baselines, and avoid fragmented regulatory approaches.
- Session stresses the need to balance digital growth and sustainability, as AI and data center expansion can threaten environmental goals if left unchecked.
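Among SDIA's four target areas, energy efficiency is quantified via PUE (Power Usage Effectiveness): total facility energy divided by the energy reaching IT equipment alone, with 1.0 as the theoretical floor. The session gave no figures, so the numbers below are purely illustrative.

```python
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power Usage Effectiveness: total facility energy / IT equipment energy.
    1.0 would mean every watt drawn reaches the IT equipment."""
    if it_equipment_kwh <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kwh / it_equipment_kwh

# Illustrative figures only: a facility drawing 15 GWh/yr with 10 GWh/yr
# reaching IT equipment spends a third of its intake on cooling and overhead.
print(pue(15_000_000, 10_000_000))  # 1.5
```

A PUE target in an accord like the SDIA is attractive precisely because it is this simple to compute and audit from metered data, which is why it serves as a common baseline across otherwise very different facilities.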
AI and Open Data | Unlocking Public Value and Impact at Scale
The session, 'AI as an Opportunity for More Impactful Open Data,' explored how artificial intelligence can revolutionize the accessibility, quality, and utility of open data—particularly public data curated by national statistical offices (NSOs). Panelists emphasized that while open data has undergone major evolutionary waves (from freedom of information to government data portals to private sector participation), the current era is defined by the convergence of AI and open data. AI's expansion presents opportunities for democratizing data access, improving data quality via tools like synthetic data or intelligent validations, and deploying conversational interfaces that allow broader, non-expert use of statistical datasets. At the same time, risks like a backlash against open data and the need for strong data governance and trust were highlighted. NSOs were cast as critical custodians, providing high-quality, structured, and ethically managed data that fuels trustworthy AI systems. The speakers advocated for NSOs to evolve from mere data producers to active stewards and orchestrators of national data ecosystems, underpinned by robust standards, transparency, and public-interest priorities. Investment in AI-readiness, data provenance, and data contracts were seen as strategic imperatives, with India’s NSO cited as a leading example of AI-powered innovation in official statistics. Overall, the session made clear that harnessing AI for public good in open data requires cross-sector collaboration, modernized institutional roles, and renewed commitments to transparency and data literacy.
- Stefaan Verhulst (co-founder of The GovLab and The Data Tank, NYU professor) outlined the 'fourth wave of open data,' where AI and open data intersect for transformative public use.
- Historical context: three prior open data waves were freedom of information, government data portals, and private sector contributions.
- AI enables new capabilities such as conversational data interfaces and synthetic data generation, making open data more accessible and filling data gaps.
- Data quality, provenance, and trust are critical: NSOs uniquely produce validated, standardized, and transparent data that is essential for reliable AI outcomes.
- AI can help automate data contracts and quality control, but risks include increasing backlash and data becoming less accessible, requiring vigilance.
- NSOs must evolve from passive data producers to integrated orchestrators of the national data ecosystem, balancing production with stewardship and regulation.
- NSOs' strengths include robust legal and ethical frameworks, neutrality, confidentiality protections, and expertise in maintaining longitudinal data series.
- India's NSO is recognized as a model for leveraging AI in statistical innovation—including projects with Google to use AI and satellite data for more timely population and housing statistics.
- Panelists called for investment in AI-readiness (e.g., updating data frameworks to be AI-ready), prioritization of public-interest AI use cases, and enhanced data and AI literacy.
Building High-Quality AI for Education: From Innovation to System-Wide Impact
The session at the India AI Impact Summit 2026 focused on building high-quality AI systems for education, particularly in low- and middle-income countries, with an emphasis on quality assurance, benchmarks, and scalability. Jonathan Stern of the Gates Foundation opened by highlighting that while effective educational interventions exist, leveraging AI to scale proven approaches requires robust ecosystem support and careful quality assurance. Romana from Fab AI detailed their approach: a multi-layered, cyclical quality assurance process covering everything from bias and safety to pedagogical relevance and contextual alignment. Notably, a mapping of 352 AI educational tools in South Asia and Sub-Saharan Africa showed that only 9% had any supporting evidence. Fab AI has developed the first pedagogical benchmark to test AI on real teacher exams and tracks model performance across languages and contexts, revealing significant drops (up to 15%) in accuracy for local languages. Benchmarking, impact evaluation, and rapid assessments are emphasized, as is co-creation with educators and rigorous measurement of outcomes and unintended consequences. IDinsight illustrated the importance of context-specific information and robust quality control when deploying AI at scale, sharing examples from healthcare where AI augmented service delivery while adhering strictly to local guidelines. The panelists underlined that effective AI for education demands continuous, context-aware measurement, rapid yet rigorous evaluation cycles, and close integration with local curricula and infrastructure, moving beyond mere availability of technology to demonstrable learning impact.
- Session focused on ecosystem and quality assurance for scalable, effective AI in education, particularly in South Asia and Sub-Saharan Africa.
- Mapping of 352 AI education tools found only 9% had any supporting evidence for effectiveness.
- Fab AI advocates a multi-layered quality assurance framework: global (safety, bias), educational (learning impact), technical (assessment capability), and context (curriculum, infrastructure, social norms).
- Fab AI developed the first pedagogical benchmark for AI, testing on real teacher exams and tracking performance in multiple languages.
- Model performance drops by an average of 15% when benchmarks are translated into major African languages, with even greater drops for smaller models and when human rather than AI translations are used.
- Visual reasoning remains a key weakness in AI models for foundational numeracy; targeted benchmarking aims to address this.
- Impact evaluations are being conducted with partners like Google DeepMind; rapid cycles (8–10 weeks) enable lower-cost, faster assessment alongside traditional RCTs.
- 'What doesn’t get measured doesn’t get improved' is a guiding mantra: measurement and continuous evaluation are seen as critical at every stage.
- ID Insight shared health sector experience: integrating AI into service delivery (e.g., chatbots) can significantly increase efficiency but must strictly adhere to local, trusted guidelines.
- Effective deployment of AI in education depends on evidence-backed, scalable solutions that remain contextually and pedagogically relevant, not just technologically feasible.
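The cross-language benchmarking described above reduces to comparing accuracy on the same items before and after translation. A minimal sketch, with invented languages and predictions (none of this is Fab AI's actual benchmark data):

```python
# Hypothetical sketch: measuring the per-language accuracy drop on a
# pedagogical benchmark relative to an English baseline.
# All answer keys and predictions below are illustrative.

def accuracy(predictions, answers):
    """Fraction of benchmark items the model answered correctly."""
    correct = sum(p == a for p, a in zip(predictions, answers))
    return correct / len(answers)

def accuracy_drop(baseline_acc, translated_acc):
    """Percentage-point drop when the benchmark is translated."""
    return round((baseline_acc - translated_acc) * 100, 1)

# Illustrative answer key and model predictions per language version.
answers = ["B", "A", "D", "C", "A", "B", "D", "C", "A", "B"]
preds = {
    "english": ["B", "A", "D", "C", "A", "B", "D", "C", "A", "C"],
    "swahili": ["B", "A", "C", "C", "A", "B", "A", "C", "A", "C"],
}

base = accuracy(preds["english"], answers)  # 0.9
sw = accuracy(preds["swahili"], answers)    # 0.7
print(accuracy_drop(base, sw))              # 20.0 percentage points
```

Tracking this drop per language and per model size is what lets a benchmark surface the localization gaps the session described.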
India–Japan Collaboration in Construction, Manufacturing, and Technology
This session discussed the evolving dynamics of Japan-India collaboration in construction, manufacturing, and technology, amidst Japan’s demographic challenges and global expansion strategies. Speakers highlighted the need for Japan to look outside its borders to sustain growth, particularly by leveraging Indian talent and partnerships. Despite Japan’s longstanding cultural and historical preference for Southeast Asian markets, recent high-level bilateral commitments and landmark Japanese bank investments in India signal promising capital flows and deeper integration. The panel recognized both the opportunities and cultural barriers, emphasizing that sustained investment and policy action are key to unlocking the potential of the India-Japan relationship in the coming years.
- Japanese mega-contractor and consultancy firms are increasing acquisitions and establishing innovation centers outside Japan, including in India.
- Japan faces significant demographic and scale-related issues, necessitating the import of talent, particularly from India, to sustain its industries.
- Japanese industry is supplementing robotics with human talent, recognizing that human interface remains crucial.
- Despite two standout Japanese companies (Suzuki and Daikin) in India, broader Japanese investment has lagged due to cultural and historical factors favoring Southeast Asia.
- Japanese investment policy and strategic focus have shifted favorably in recent years, especially post-COVID and following supply chain disruptions.
- High-level commitments were made during the August 2025 India-Japan summit, but participants stressed the need for concrete follow-through via investments.
- Japanese banks such as SMBC and Mizuho have recently invested in Indian banks, signaling the start of increasing capital flows between the two nations.
- The next five years are projected to be especially promising for India-Japan relations, provided historical and cultural barriers can be addressed.
- Seven Japanese companies, including panel participants, are showcasing at the Japan Pavilion at the event.
APAC Centre for AI: Regional Leadership in a Global AI Economy
The opening session at the India AI Impact Summit 2026 convened a diverse panel of experts from AI startups, multinational corporates, policy think-tanks, academia, and international organizations to chart the future trajectory of AI in Asia-Pacific (APAC). The dialogue emphasized the urgent need for multi-stakeholder collaboration—spanning startups, established enterprises, academia, and government—to unlock the region’s immense AI potential while addressing domain-specific challenges. Panelists articulated that, similar to the fintech revolution a decade ago, AI will drive organizational efficiency and sectoral transformation without widespread job displacement. Key opportunities were identified in applied AI, especially within BFSI, healthcare, and public sector workflows, while notable obstacles include the scarcity of early-stage funding for deep tech and the necessity to build stronger corporate-startup partnerships. Enhanced bilateral cooperation, including reciprocal 'soft landing' incubation and improved market access across APAC countries, was spotlighted as crucial for startup scalability. Academic panelists underscored the need for translational research labs—bridging the gap between fundamental research and commercialization—and suggested evolving metrics for research success to prioritize commercialization and entrepreneurship starting at the educational level. The session set the stage for policy-driven, inclusive AI ecosystem development, and highlighted India’s ambition to lead global startup growth by supporting AI innovation at scale.
- Announcement of an APAC-level AI center through a multi-stakeholder coalition (Core AAI in collaboration with PlugandPlay and GIFT City) focused on responsible AI innovation.
- PlugandPlay India highlighted running 600+ programs globally and its current work fostering AI talent centers in APAC.
- Applied AI is gaining traction across BFSI, healthcare, and government sectors in APAC; focus is shifting toward solving business needs rather than technology alone.
- Startups face acute challenges in access to early-stage capital, especially for deep tech; call for greater involvement from both government and corporates.
- Proposal of reciprocal soft landing programs to facilitate APAC-wide startup entry and funding, addressing India's historical trade and collaboration imbalances.
- Emphasis on streamlining tariff and trade agreements to accelerate cross-border AI partnerships; reference to recent progress between India and the EU/UK.
- Academia encouraged to adopt commercialization and entrepreneurship metrics (e.g., companies and patents generated, not just papers published); mention of 200,000+ Indian startups now registered, aiming for #1 global ranking.
- Translational research labs cited as a trend to bridge academic research and market needs (TRL 2/3 to TRL 7/8/9), now included even in policy frameworks.
- Educational reforms suggested to introduce entrepreneurship at earlier stages and align research goals with national priorities like healthcare, agriculture, and workforce development.
India–Japan AI Partnership: Collaborating for Global Impact
The opening session of the India AI Impact Summit 2026 featured keynote addresses from leading representatives of Japan and India, showcasing recent policy shifts, collaborative initiatives, and infrastructure developments in the AI sector in both countries. Japan's METI highlighted national strategies for accelerating AI industrial development, including the deployment of advanced computing infrastructure, support for generative AI innovation through the GENIAC Accelerator, and the enactment of a comprehensive AI Act aiming to create a globally competitive, innovation-friendly regulatory environment. India outlined impressive growth in its AI sector since launching the India AI Mission in March 2024, moving from 7th to 3rd place on the global AI Vibrancy Index. The Mission's multifaceted approach was detailed, including substantial subsidies on computing resources, the AI Kosh data set platform, innovation centers for indigenous LLMs, and an emphasis on responsible AI. The panel then moved to discuss the strengths of each country and strategies for fostering sovereign AI, with representatives from both nations and companies like Fujitsu underscoring the complementary strengths of Japan's technical and engineering legacy and India's talent pool. The event set a cooperative tone aimed at leveraging mutual strengths for joint advances in trustworthy, scalable AI, while fostering ongoing partnership and knowledge exchange.
- Japan is accelerating AI industrial development via strategies that integrate foundation models, domain-specific AI, and unique industrial datasets.
- Key Japanese initiatives include the deployment of national data centers, reinforcement of the semiconductor supply chain, and collaboration projects (e.g., ABCI) to enhance computational capabilities.
- The GENIAC (Generative AI Accelerator Challenge) initiative, launched in February 2024, aims to subsidize compute access, foster collaboration, and strengthen the AI developer community.
- Japan enacted its AI Act in June 2025, emphasizing innovation-friendly regulation aligned with international norms, safety, and transparency.
- Japan launched the Hiroshima AI process at the 2023 G7 summit and published responsible AI guidelines for business in April 2024.
- India's AI sector surged from 7th to 3rd place globally on the AI Vibrancy Index following the March 2024 launch of the India AI Mission.
- The India AI Mission supports access to 38,000+ subsidized GPUs, a data platform (AI Kosh) with 9,000+ datasets, and open-source tools.
- India focuses on developing indigenous foundation models, providing compute infrastructure subsidies (over 40%), and fostering a skilling pipeline from annotation to PhDs.
- Dedicated pillars cover innovation, startup support, responsible AI (Safe and Trusted AI), and government-driven problem-solving challenges via hackathons and competitions.
- Fujitsu Research has 400 out of 1,500 global researchers based in India, highlighting deep integration and reliance on India's AI talent.
- The theme of sovereign AI was explored, emphasizing Japan's engineering legacy and security focus and India's research ecosystem and talent capacity as complementary strengths.
- The session emphasized the need for Japan and India to work together on sovereign, secure, and scalable AI systems, leveraging their respective strengths.
Fair AI Supply Chains: Building Safe and Trusted AI Systems
The session, hosted by the Fairwork project—a joint initiative between the Oxford Internet Institute and the Berlin Social Science Center—explored the often invisible yet crucial human labor underlying the AI supply chain. Presenters highlighted how millions of workers, predominantly in the Global South, engage in data labeling, moderation, and annotation work under poor, exploitative, and commodified conditions. The panel addressed the pervasive lack of corporate and governmental incentive to improve these labor standards, given the ease with which companies can shift operations to regions with less stringent regulations. Fairwork has developed a universal rating and certification framework based on five key principles: fair pay, fair conditions, fair contracts, fair management, and fair representation. By assessing 826 companies across 41 countries, Fairwork has driven more than 427 documented positive changes benefiting over 16 million workers worldwide. The session emphasized the need for lead AI firms, mostly based in the Global North, to move beyond supplier-level improvements and enforce fair labor standards throughout their supply chains, prompted by growing regulatory pressure, reputational risk, and operational risks tied to poor working conditions. Fairwork introduced its certification scheme as a scalable tool to support due diligence and drive global labor improvements in AI supply chains, shifting focus toward certifying the practices of leading firms and embedding minimum labor standards across their networks.
- Fairwork is an action-research initiative jointly led by the Oxford Internet Institute and Berlin Social Science Center, focusing on accountability in the digital economy.
- Fairwork operates on five principles: fair pay, fair conditions, fair contracts, fair management, and fair representation.
- Over the past six years, Fairwork has assessed 826 companies in 41 countries, resulting in 427 documented business practice changes and impacting over 16 million workers.
- AI-related labor—such as data annotation and content moderation—is typically invisible, deskilled, and highly surveilled; most jobs are located in poorer countries with little incentive for improvement.
- Governments in key outsourcing countries prioritize job quantity over job quality, allowing exploitation to persist.
- Lead AI firms in the Global North have the leverage to enforce better labor standards but often lack incentives to take action.
- Three drivers are compelling AI firms to address labor standards: emerging due diligence regulations (e.g., the EU's Corporate Sustainability Due Diligence Directive), reputational risks from public exposure, and operational risks due to labor dissatisfaction.
- Fairwork's new certification scheme goes beyond rating suppliers, aiming to engage leading firms to embed fair labor standards in their procurement and supply chain processes.
- Fairwork's frameworks and scorecards serve as benchmarks, enable company engagement, and help drive best practices in global AI labor markets.
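As a rough illustration of how a five-principle rating like Fairwork's can be aggregated, here is a sketch assuming the commonly described scheme of up to two points per principle (a basic and an advanced threshold, for a score out of ten); the company assessment shown is invented:

```python
# Sketch of a Fairwork-style scorecard. Assumption: each principle can
# earn one point for meeting a basic threshold and a second for an
# advanced threshold, with the advanced point contingent on the basic
# one. The example assessment is invented, not a real rating.

PRINCIPLES = ["pay", "conditions", "contracts", "management", "representation"]

def fairwork_score(assessment):
    """Aggregate (basic_met, advanced_met) pairs into a score out of 10."""
    total = 0
    for principle in PRINCIPLES:
        basic, advanced = assessment[principle]
        if basic:
            total += 1
            if advanced:  # advanced point only counts on top of basic
                total += 1
    return total

example = {
    "pay": (True, True),
    "conditions": (True, False),
    "contracts": (True, False),
    "management": (False, False),
    "representation": (False, False),
}
print(fairwork_score(example))  # 4 (out of 10)
```

Gating the advanced point on the basic one keeps the score monotone: a company cannot offset a failed minimum standard with an advanced practice elsewhere in the same principle.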
Trust as a Global Imperative: How to Make Safe AI Work for Everyone
The session at the India AI Impact Summit 2026 focused on practical approaches for operationalizing AI safety through responsible, ethical, and inclusive principles. Leveraging insights from global leaders in AI governance, including those behind the UNESCO framework on AI ethics, the panel highlighted the importance of embedding trust, human rights, and transparency into AI policy and deployment. Panelists emphasized the need for actionable, country-specific policies that move beyond high-level commitments to tangible outcomes, using public procurement and participatory governance as key levers. The summit's unique positioning in the Global South, specifically India, underscored the urgency of closing the global gaps in AI infrastructure, representation, and impact, while ensuring communities most affected by AI are involved in its governance. The discussions reinforced that trust—grounded in transparent, inclusive, and rights-respecting systems—is foundational to realizing the benefits and averting the risks of AI in a rapidly evolving and unequal global landscape.
- UNESCO's Recommendation on the Ethics of AI, a global human rights-based framework, has been adopted by 193 nations.
- Global South countries can shape AI impact through tools like public procurement, mandating ethical and safety standards in contracts (e.g., allocating 15% of budgets to procurement with strict AI requirements).
- India highlighted as a proof point for responsible AI adoption, citing initiatives like Aadhaar, India Stack, and plans for developing indigenous AI models.
- Significant disparities persist: Africa has 18% of the world’s population but less than 1% of data center capacity, demonstrating the uneven access to AI infrastructure.
- Participatory frameworks, involving communities in the design and governance of AI tools, are critical for building sustainable and trusted AI systems.
- Trust is identified as the most crucial currency in AI’s societal integration; it cannot be mandated by policy alone, but sustained through transparency, inclusiveness, and demonstrable safety practices.
- Addressing gender inequality remains pivotal: women are underrepresented in AI design and decision-making, but overrepresented among those affected by cybersecurity threats and harassment.
- Despite numerous summits and commitments, the underlying architecture and power balance in global AI development and deployment remains largely unchanged.
- Concrete safety operationalization requires investments, legal frameworks, and institutional reforms tailored to each nation's strategic development goals.
AI for Financial Inclusion: Fraud Prevention in BFSI
The session at the India AI Impact Summit 2026 focused on the rapid evolution and scaling of India's digital financial ecosystem, emphasizing the foundational role artificial intelligence (AI) must play as infrastructure rather than just a tool. Leaders discussed how India’s digital rails, exemplified by UPI processing over 20 billion transactions monthly, now operate at a scale demanding real-time, adaptive fraud detection and risk management solutions. Traditional rule-based systems are increasingly inadequate to handle the sophistication and speed of evolving fraud tactics, making AI critical for in-flight intelligence, continuous risk assessment, and detection of behavioral anomalies and mule networks that static approaches miss. The discussion also highlighted the Indo-Singapore partnership: India contributes scale, diverse datasets, and technological talent, while Singapore adds regulatory rigor and governance. Key frameworks for AI governance in BFSI (Banking, Financial Services, and Insurance) were presented, notably the 'JCT' (Justifiable, Contestable, Traceable) paradigm, ensuring decisions are explainable to customers, auditors, and regulators, and preserving trust while reducing friction and exclusion. Panelists stressed the need for mindset shifts in scaling AI, improved data readiness, and the importance of embedding rigorous and respectful guardrails to prevent unintended societal or customer biases. The future vision includes AI reducing onboarding friction, supporting MSMEs, enabling intelligent customer authentication (such as 'street mode'), and responsibly scaling digital financial services with explainability and accountability at the core.
- India’s digital finance infrastructure (e.g., UPI) processes over 20 billion transactions monthly, having doubled in the past 2-3 years.
- Projected financial fraud losses could exceed ₹1 lakh crore (₹1 trillion) without advanced, scalable interventions.
- AI must shift from after-the-fact detection to real-time ('in-flight') intelligence for genuine risk reduction at scale.
- Rule-based, legacy fraud controls are insufficient at population scale; AI-driven adaptive systems are needed to handle rapid and complex fraud vectors.
- AI can reduce false positives, lower onboarding friction, and minimize exclusion, especially for first-time users and MSMEs.
- The Indo-Singapore partnership leverages India’s technology and data scale with Singapore’s strong regulatory and governance expertise.
- ‘JCT’ (Justifiable, Contestable, Traceable) framework ensures AI decisions in BFSI are explainable and accountable to customers, auditors, and regulators.
- Guardrails such as ‘PURE’ (Purposeful, Unbiased, Respectful, Explainable) help address societal biases and ensure ethical AI deployment.
- Innovative features like ‘street mode’ use context and location-aware AI authentication to add security without excessive friction.
- Scaling AI is as much a mindset and governance challenge as a technological one; success depends on organizational willingness to tolerate and learn from early errors.
- Endorsed the transition of AI from successful pilots to scaled, auditable, and privacy-compliant systems.
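To make the rule-based vs. adaptive contrast concrete, here is a toy sketch (not any bank's production logic): a fixed-amount rule misses a transfer that a simple per-account behavioural baseline flags. All thresholds and figures are invented:

```python
# Toy contrast between a static fraud rule and a simple adaptive check
# that learns each account's normal behaviour. Stand-in for the
# behavioural models the panel described; numbers are illustrative.
from statistics import mean, stdev

STATIC_LIMIT = 50_000  # static rule: flag any transfer above a fixed amount

def static_rule(amount):
    return amount > STATIC_LIMIT

def adaptive_flag(history, amount, k=3.0):
    """Flag a transfer that deviates sharply from the account's own history.

    Suspicious if the amount exceeds the account's mean by more than
    k standard deviations; falls back to the static rule until enough
    history exists to form a baseline.
    """
    if len(history) < 2:
        return static_rule(amount)
    mu, sigma = mean(history), stdev(history)
    return amount > mu + k * max(sigma, 1.0)

# An account that normally moves small sums:
history = [900, 1_100, 1_000, 950, 1_050]
print(static_rule(30_000))             # False: slips past the fixed rule
print(adaptive_flag(history, 30_000))  # True: anomalous for this account
```

The same transfer passes the static check but trips the adaptive one, which is the gap between legacy rules and the per-account, in-flight scoring the session called for.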
National AI Strategy for Health: Vision, Policy, and Impact
The panel discussion at the India AI Impact Summit 2026 brought together key public and private sector leaders to examine the current state and future direction of AI adoption in India's public health sector. Under the chairmanship of Poonam Shastri, Secretary, Ministry of Health and Family Welfare, the session highlighted the digital transformation journey of the Indian health system, emphasizing scalable, responsible AI integration to improve service delivery, reduce the burden on the health workforce, and democratize healthcare access. The conversation touched upon successes including the massive scale of the Ayushman Bharat Digital Mission (ABDM), AI-powered disease detection initiatives, and primary healthcare upgrades. Speakers underlined the need to move from pilot projects to ecosystem-level implementation, ensure interoperability via open standards, and promote robust public-private partnerships. Challenges such as system interoperability, procurement, capacity building, and supportive policy frameworks were examined, with a strong focus on maintaining inclusion, equity, and trust as AI becomes entrenched in public health delivery. The session set a tone of collaboration and continuous improvement towards a mature, transformative AI-enabled health ecosystem by 2030, aligned with India’s vision for Viksit Bharat 2047.
- 859 million citizens have Ayushman Bharat Health Accounts (ABHA), with over 878 million health records digitized nationwide.
- India now has 180,000 upgraded Ayushman Arogya Mandirs (primary health centers) and has facilitated 449 million telemedicine consultations via e-Sanjeevani.
- AI-enabled solutions like Madu Netra (for diabetic retinopathy), handheld X-ray detection for TB, and AI-powered media disease surveillance have been deployed at significant scale.
- More than 7,000 patients benefited from diabetic retinopathy screening across 38 facilities; over 1.6 lakh individuals screened for TB using AI tools.
- Three Centers of Excellence for AI in Healthcare have been established at AIIMS Delhi, PGIMER Chandigarh, and AIIMS Rishikesh.
- Frameworks promoting open standards, interoperability, and responsible AI, especially generative AI, are being advanced through ABDM.
- The panel stressed the shift from pilots to sustainable, ecosystem-wide integration, emphasizing the role of both public and private sectors.
- Procurement, governance, and institutional pathways are under scrutiny to ensure smooth, scalable AI adoption within diverse state contexts.
- Digital public infrastructure and AI are positioned as tools for inclusion and equity, central to India’s health transformation goals for 2047.
AI & the Future of India’s Tech-Enabled Services Sector | India AI Impact Summit 2026
The session at the India AI Impact Summit 2026 focused on the transformative effects of AI on India's IT and BPO sectors, addressing the critical shift from labor-based value to innovation, domain expertise, and AI-enabled orchestration. Panelists discussed an assertion attributed to Vinod Khosla that traditional BPO and IT services may largely disappear within five years due to AI, but highlighted that India's accumulated process knowledge, domain expertise, and integration skills position it to transition from labor arbitrage to higher-value services. The conversation emphasized that while routine, low-end roles are at risk, the country’s $280 billion IT sector, employing around six million people, is not solely dependent on cost advantages. Instead, India's future competitiveness will rely on its ability to rapidly upskill talent, embrace agentic AI, and foster a culture of innovation and productization. Structural shifts required include integrating product-focused teams at the strategic level within service firms and transparently committing to transformative timelines. The panel agreed that the evolution offers an opportunity for India to capture greater global market share in tech services if managed proactively, pivoting from resource-based operations to orchestrating technology-driven business outcomes.
- Vinod Khosla predicted at the summit that BPO and IT services in their current form could disappear in 4-5 years due to AI advancements.
- India's IT sector currently employs about 6 million people and generates nearly $280 billion in revenue.
- Over the past 25 years, the sector has grown roughly 55× in revenue and 20× in employment.
- India holds about 20-25% global market share in IT/BPO services, with total addressable market estimated at $1.8 trillion+.
- AI is expected to most dramatically impact lower-tier, routine roles, necessitating a shift toward specialized and orchestrative work.
- Panelists asserted that India’s competitive advantages increasingly stem from deep process and domain expertise, not just labor cost.
- Structural changes required for Indian IT firms include: 1) embracing innovation and product-oriented cultures, 2) integrating product leaders into core strategy (‘seat at the table’), and 3) openly communicating their transformation journey to markets and stakeholders.
- India's National Skill Development Corporation (NSDC) is identified as a critical player in retraining and upskilling the workforce.
- Agentic AI and orchestration of AI solutions are seen as the next areas of significant opportunity, alongside continued trust relationships and systems integration demands from global clients.
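As a back-of-envelope check on the growth figures above, an N-fold increase over y years implies a compound annual growth rate of N^(1/y) - 1:

```python
# Implied compound annual growth rates for the sector's reported
# 55x revenue and 20x employment growth over 25 years.

def cagr(multiple, years):
    """Compound annual growth rate implied by an overall multiple."""
    return multiple ** (1 / years) - 1

print(f"revenue:    {cagr(55, 25):.1%}")   # 17.4% per year
print(f"employment: {cagr(20, 25):.1%}")   # 12.7% per year
```

The gap between the two rates reflects the productivity story in the summary: revenue per employee has risen steadily as the sector moved up the value chain.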
AI for the Last Mile: Human-Centred Design for Bharat
The session at the India AI Impact Summit 2026 brought together leading government officials, venture capitalists, behavioral scientists, and AI entrepreneurs to discuss AI's potential to bridge or widen India's social and economic divides. The discussion centered on the real-world deployment of AI across sectors such as agriculture, education, healthcare, and justice. Telangana's pioneering role in making government data openly accessible to innovators was highlighted as a catalyst for AI-driven solutions. Challenges consistently emerged around data quality and availability, scaling pilots, compute resource requirements, and the need for continuous monitoring and user-centric trust and safety measures. Case studies, such as Rocket Learning's AI-driven ed-tech platform and Wysa's multilingual mental health bots, illustrated practical successes and challenges in AI scaling. The conversation underscored the uniqueness of Indian nonprofit AI innovation and the global potential of locally developed solutions, while emphasizing the necessity for new capital and governance models to ensure responsible and scalable AI impact.
- Telangana launched the Telangana Data Exchange Platform, offering 1,100+ open datasets across sectors to AI innovators, following earlier success with an agriculture data exchange.
- Data quality and availability remain the biggest bottleneck for AI projects from inception to scaling, often leading to multi-month delays.
- Rocket Learning's AI-driven education platform now reaches millions of children and hundreds of thousands of Anganwadi workers, leveraging WhatsApp for lesson delivery and parental engagement.
- Rocket Learning’s dual emphasis on rigorous product design and embedded research, including randomized controlled trials and A/B testing, led to significant iteration and scaling.
- The need for AI solutions tailored for 'Bharat'—India’s unique context—was stressed, distinguishing from Silicon Valley-style approaches.
- Scaling AI pilots faces new challenges: high up-front compute costs, cross-state data/environment fragmentation, and capital allocation issues, especially in less-monetizable sectors.
- Continuous evaluation and monitoring are critical in AI deployments, surpassing traditional software update cycles.
- Donors and philanthropic capital must adapt to the higher initial compute and development costs associated with AI solutions.
- Homegrown Indian AI nonprofit tech innovations have potential for global adaptation and impact, particularly in other large, developing countries.
AI and Media: Opportunity, Responsibility, and the Road Ahead
The opening session of the 'AI and Media' track at the India AI Impact Summit 2026 convened top leaders from India's preeminent news organizations to discuss the transformative impact of artificial intelligence across the media sector. Panelists acknowledged AI's growing role in content creation, curation, and monetization, while emphasizing that in a complex and diverse media ecosystem like India's, human editorial judgment, institutional memory, and trust remain paramount. Media leaders described AI as a valuable tool for improving content depth, automating repetitive tasks, enriching context through archival intelligence, and enhancing retention and revenue models, but agreed it should augment, not replace, journalistic responsibility. They warned of the dangers of 'AI slop,' ethical challenges, and the difficulty in distinguishing between authentic and machine-generated content—especially critical in a country with wide disparities in literacy, language, and affluence. The panel called for a responsible, human-centric approach to integrating AI in newsrooms and urged cooperation among media owners, policymakers, and technology companies to safeguard public trust, accuracy, and societal well-being as AI adoption accelerates.
- The session brings together major Indian media leaders—Times of India, India Today Group, Dainik Bhaskar, Amar Ujala, The Hindu, and international experts—to examine AI's impact on the industry.
- AI is already transforming how information is gathered, curated, disseminated, and monetized in India's largest news organizations.
- Trust, accountability, and editorial discretion remain foundational; AI should serve as an aid, not a substitute, for journalistic integrity.
- News organizations like India Today and The Hindu shared practical AI applications: AI anchors, voice cloning, 'AI sandwich' workflows, contextualization using archives, automation of repetitive tasks, and predictive models for retention and engagement.
- In a recent survey, 45% of Indian respondents could not distinguish AI-generated from human-made content, posing new risks for information integrity.
- India produced 200,000 hours of video content in 2025, the most of any country, with AI playing a growing but still limited role.
- Panelists likened AI to nuclear technology—powerful yet requiring careful, responsible management, with the potential for both great value and harm.
- They called on media owners, policymakers, and technology platforms to work together to combat misinformation and ensure responsible use of AI for the nation's future.
Future of Work in the Global South: Skills, Mobility, and Opportunity
The session at the India AI Impact Summit 2026 focused on the challenges and opportunities of AI adoption and workforce skilling in the Global South, especially in India and South Asia. The panel comprised leaders from investment, policy, tech, and international organizations, highlighting the urgent AI-skills gap between the Global North and Global South, where adoption is reportedly twice as high in the North. Key themes included the critical need for AI skilling and retraining programs that go beyond technical coding to encompass data literacy, ethics, adaptability, and lifelong learning. Panelists emphasized the necessity for inclusive policies that address not just full-time employees but also platform, informal, and temporary workers. Research shared during the session showed that lack of skills is a top barrier to AI adoption: 40% of firms in manufacturing and finance, and 50% of SMEs, cited skills shortages as blockers. Investments in training correlate with improved job quality and performance, and modular, flexible, and portable learning credentials can boost mobility and opportunity. India’s leadership with policies like the India AI Mission (2024), Shram Shakti Niti (2025), and DPI-based approaches was recognized as a potential blueprint for the region. The session also launched the “AI for All Workforce Skilling Policy Toolkit” to help design, implement, and evaluate skilling initiatives, acknowledging both demographic challenges (e.g., 54% of South Asian youth lack adequate job skills) and the need for systems-level—not just individual—solutions. Collaboration between government, industry, and multilateral institutions was flagged as essential to scale equitable AI opportunity and social mobility.
- AI adoption in the Global North is 2x higher than in the Global South; skills gaps are a key barrier.
- India’s AI policy leadership includes India AI Mission (2024), Shram Shakti Niti (2025), and the DPI approach, serving as regional models.
- Newly launched 'AI for All Workforce Skilling Policy Toolkit' supports designing, implementing, and measuring broad-based skilling initiatives.
- Research shows 40% of manufacturing/finance firms and 50% of SMEs are unable to adopt AI due to lack of workforce skills.
- AI-driven skill demand is less about coding (only ~1% of workforce) and more about data literacy, ethical understanding, and ability to use AI tools.
- Formal skilling policy must address informal and platform workers, not just traditional full-timers.
- Half of workers receiving AI-related training report better job quality and performance.
- Transferable, modular micro-credentials are needed to boost job mobility across borders and sectors.
- 54% of South Asian youth lack skills for decent jobs, threatening the demographic dividend.
- Panel stresses the need for system-level approaches and international collaboration to close the AI skill and opportunity gap.
AI & Arthik Shakti: A Blueprint for Women-Led Prosperity
The session at the India AI Impact Summit 2026 brought together leading voices from national and international organizations, including the National Commission for Women (NCW), UNITAR, ITU, law enforcement and creative sectors, to discuss AI's potential and challenges for women's empowerment. The dialogue revolved around bridging the gender digital divide, standardizing AI development, ensuring online safety, upskilling women, addressing AI bias, and fostering creative and economic opportunities through AI. UNITAR announced the development and deployment of over 250 AI-related standards, with another 250 in progress, to enable easier adoption and innovation, especially for vulnerable populations such as women. The ITU highlighted the launch of its Innovation Center in Delhi, organizing events to democratize AI-driven solutions (like AI-enabled X-ray analysis in rural health). Experts stressed the need for governance, impact assessments, and support systems to combat gendered cyber-risks, ensure equitable participation, and harness AI for women's entrepreneurship and leadership, especially in rural India.
- UNITAR has developed over 250 international AI standards, with around 250 more in the pipeline, facilitating easier and more inclusive AI development.
- The ITU Innovation Center has launched in Delhi to foster AI innovation and organize the 'innovation cafe' series, bringing cutting-edge solutions like AI-driven X-ray analysis to underserved communities.
- Persistent gender digital divide issues are acknowledged, with calls for policy interventions, capacity development, targeted upskilling, and improved device access for women, especially in rural areas.
- NCW highlighted the importance of skilling, entrepreneurship enablement, digital marketing, mentorship, and online safety awareness for women, supported by partnerships with state women's commissions.
- Panelists underscored AI’s current male-centric biases (e.g., 91% of deepfake victims are women), the need for AI impact assessments, and institutional mechanisms for grievance redressal and safety.
- Emphasis was placed on the democratization of standards and knowledge-sharing so women can both use and shape AI, including in sectors like health and agriculture.
- The creative sector illustrated how AI streamlines workflows for women and young creators but flagged ongoing barriers related to bias, algorithmic opacity, and ownership.
Safe AI in Education: Practitioner Insights from the Global South
The session at the India AI Impact Summit 2026, hosted by the Wadhwani School of Data Science and AI and IIT Madras’ Center for Responsible AI, brought together thought leaders spanning education, technology, policy, and philanthropy to discuss the safe and responsible integration of AI in education. Opening remarks emphasized the Center’s commitment to multi-disciplinary research and its leadership in setting ethical AI standards nationally. Swati, representing Khan Academy India, detailed the large-scale, teacher-directed deployment of the Khanmigo AI learning assistant, which incorporates strict safety and privacy features and flagging mechanisms for student welfare, emphasizing that safety by design must extend beyond development to thoughtful, supervised deployment. The panel identified universal student safety issues across regions, with robust safeguards needed irrespective of geography. An interactive poll revealed student dependency and reduced critical thinking as the most pressing risks of AI in education. Shini from Ohio State University highlighted the pervasive and complex challenges of AI in higher education, such as integrity in admissions, the changing nature of learning experiences, and concerns about cognitive decline due to increased AI mediation in student interactions. The discussion underscored that while AI holds promise to democratize educational resources, careful guardrails and human oversight are essential to ensure the holistic well-being and intellectual development of learners.
- IIT Madras' Center for Responsible AI is leading policy and technical research to formulate national guidelines for accountable and ethical AI deployment.
- Khanmigo, Khan Academy's AI learning assistant, has scaled to over 4 million global users (including 2 lakh students and 2 lakh teachers in India), prioritizing safety by design through curated content, data privacy, and teacher-flagged alerts.
- Deployment in India mirrors global best practices, affirming that AI safety standards must be universally robust, regardless of a nation’s development status.
- The most prominent safety risk of AI in education, according to a live audience poll, is student dependency and reduced critical thinking (chosen by around 70% of participants).
- Ohio State University introduced an AI fluency program covering 60,000 students across 100+ disciplines, aiming for universal AI literacy with stringent safety considerations.
- Challenges highlighted include the difficulty of assessing true student capabilities amidst widespread AI usage in applications and coursework, as well as the risk of diminished peer-to-peer learning and possible cognitive decline.
- Panelists advocated for stepwise, capacity-driven, and human-supervised AI deployments to maximize benefits and minimize risks, reinforcing the centrality of human oversight.
India is Ready for AI | Launch of the National AI Readiness Assessment Report
The opening session of the India AI Impact Summit 2026 marked the launch of India's AI Readiness Assessment Methodology (RAM) report, a diagnostic tool developed in collaboration with UNESCO and the Indian government. The RAM aims to provide a comprehensive, inclusive, and ethical framework for evaluating and guiding India's progress in AI adoption. The assessment, informed by five multi-stakeholder consultations across major Indian cities and input from over 600 participants, offers a grounded analysis of the nation’s AI ecosystem—highlighting India's strengths, such as a vibrant innovation landscape, strong digital foundations, and leadership in multilingual AI. It also identifies key areas for improvement, namely ethical data use, environmental sustainability, and the need for consistent inclusion of ethics in education. India's approach, as outlined by government leaders, emphasizes public-private partnerships, scalable models for AI compute and skilling, and a commitment to ensuring AI serves all citizens fairly and responsibly. The session underscored the importance of embedding ethical principles throughout the AI lifecycle, aligning AI policy with global standards, and making inclusion and trust central to India's AI journey.
- Launch of the India AI Readiness Assessment Methodology (RAM) report in partnership with UNESCO and the Indian government.
- RAM is a diagnostic tool for translating AI ambitions into actionable, responsible, and inclusive strategies.
- Assessment involved five consultations across Delhi, Bangalore, Hyderabad, and Guwahati, with input from over 600 stakeholders (government, startups, research, civil society).
- India accounts for 16% of the world’s AI talent and is making significant advances in multilingual AI and digital public services.
- Priority gaps identified: need for stronger ethical data use, integrating environmental sustainability into AI planning, and embedding AI ethics consistently in education and training, especially regionally and linguistically.
- Public-private partnerships form the foundation for AI infrastructure and skilling initiatives, with government enabling subsidized access and private sector building out compute capability.
- The India AI Mission is designed with flexibility, allowing midcourse policy corrections based on RAM findings.
- RAM sets out actionable recommendations and serves as a global playbook for responsible AI adoption, especially in the Global South.
- Key ethical principles (human rights, fairness, transparency, privacy, and accountability) must be embedded across the AI lifecycle.
- Recent policy frameworks (AI governance framework November 2025, techno-legal white paper 2026) emphasize standards, procurement mechanisms, impact assessments, and the necessity of human oversight in AI deployment.
- Building public trust in AI is critical and depends on embedding ethical practices from the start.
- Inclusion—across region, language, gender, ability, and socioeconomic background—remains central to India's AI policy and deployment.
Cognitive Infrastructure for Sustainable and Resilient Futures
The session at the India AI Impact Summit 2026 explored the transformative yet complex intersection of AI and urban infrastructure, a sector comprising 12% of global GDP but lagging in digitization and productivity. Panelists discussed the paradoxes and opportunities as AI costs plummet and capabilities accelerate, yet the construction and infrastructure industry remains entrenched in legacy practices. Professor Stuart Russell warned of the irreversible consequences of implementing AI in critical physical systems and the broader societal risks of de-skilling and over-reliance on autonomous AI management. He urged that AI should augment, not replace, human oversight in infrastructure. The conversation further turned to the geopolitical and financial dynamics of sustainable infrastructure, stressing that the flow of capital toward AI-for-climate solutions remains inadequate and will not naturally align with global needs without active intervention. The panel concluded that leadership, conscious policy decisions, and genuine global cooperation are vital to ensure that AI contributes positively to resilient, sustainable urban development, and does not exacerbate existing shortcomings or create new systemic risks.
- Urban infrastructure and construction is the world's largest industry ($12 trillion, 12% of GDP, 7% of workforce) but is one of the least digitized sectors.
- Industry productivity has declined and operating margins (4.7%) lag far behind other sectors like the S&P 500 (17.5%).
- AI development costs are plummeting rapidly (training costs dropping from $100M to $12M in a matter of weeks).
- Despite advances in AI, industry information remains trapped in unstructured formats (PDFs, fragmented records).
- Professor Stuart Russell highlighted the severe, irreversible risks of AI failure in physical infrastructure, emphasizing the impossibility of 'beta testing' in this environment.
- Russell warned about 'collective de-skilling', where society loses critical know-how as AI systems take over complex tasks.
- Panelists stressed that AI must support human decisions in infrastructure management, rather than blindly automate or outsource control.
- Insufficient capital is being directed to AI solutions for climate and sustainability; market forces alone won't prioritize these needs.
- Post-Paris climate objectives are off-track, and achieving sustainable development goals requires intentional policy and investment shifts.
- The panel underscored the urgent need for deliberate leadership, international cooperation, and reframed incentives to align AI and infrastructure progress with broader societal and environmental goals.
AI for All: India’s Public-Interest Policy Architecture
The session at the India AI Impact Summit 2026, hosted by the University of Delhi's Department of Political Science, gathered an interdisciplinary panel including academics, policymakers, industry leaders, and representatives from international organizations to deliberate on the future of 'AI for All' in India's policy architecture. Emphasizing that artificial intelligence is now an embedded feature of institutional and social life, panelists underscored the necessity for AI to foster trust, uphold constitutional values, and serve broad public interests rather than merely driving technological advancement. Key themes included ensuring AI is inclusive, transparent, explainable, and accountable; bridging the digital divide; balancing closed versus open AI frameworks to prevent concentration of power; and increasing public investment in research and development to meet developmental aspirations under the 'Viksit Bharat 2047' vision. The session called for collaborative, reflective, and participatory AI governance that actively mitigates bias, supports multilingualism (with tools like Bhashini), and designs policy frameworks that democratize access and opportunity across sectors such as healthcare, agriculture, and finance. The discussion also highlighted India's lagging investment in AI R&D compared to global competitors, urging a mobilization of research, innovation, leadership, and inclusive economic policy to ensure equitable and impactful AI adoption.
- India's vision of 'Viksit Bharat 2047' is driving an urgent need to ensure AI development is inclusive, transparent, and accountable.
- Panelists highlighted the importance of bridging disciplinary silos, involving stakeholders from policy, technology, law, healthcare, economics, and global governance.
- The deployment of tools like Bhashini is making public services accessible through multilingual AI, especially benefiting non-literate and marginalized groups.
- Concerns were raised over AI bias, with calls for systems to mirror constitutional values rather than existing societal inequities.
- The debate between open AI (open source) and closed AI (proprietary platforms) focused on the trade-offs between inclusivity, security, and the risks of power concentration.
- India's current R&D expenditure is only 0.6% of GDP, starkly lower than global leaders, indicating a need for greater investment to foster domestic AI innovation.
- AI was identified as a means to reduce information asymmetry and the digital divide, provided public policy intentionally leverages these technologies for welfare.
- Calls were made for 'explainable AI' to ensure transparency, accountability, and empathy in both public and private sector applications.
- AI governance should be continuously participatory, reflecting the collective responsibility of technologists, policymakers, and the public.
- Public trust in AI-driven systems hinges on maintaining democratic values, inclusion, and robust ethical oversight.
AI for Financial Inclusion: Fraud Prevention in BFSI
The session emphasized AI's transformative potential for social equity, particularly in underserved communities and agriculture, while cautioning that the future direction of AI depends heavily on the governance frameworks established today. Drawing from India's digital journey, the speaker advocated for scalable, democratic, and interoperable technological ecosystems built in partnership between the public and private sectors. The importance of trust—supported by safeguards, redressal mechanisms, and sound data governance—was underscored as critical to AI adoption. The speaker highlighted how global technological advancement and geopolitical competition now intersect, making international collaboration on AI standards, ethics, and research imperative to avoid digital fragmentation. Capacity-building across policymakers, regulators, the judiciary, and citizens was presented as equally critical to legal frameworks. The guiding principles proposed for AI governance were human-centricity, inclusivity, and adaptability. Rule of law was re-framed as a catalyst for innovation and trust, not a constraint. Ultimately, the speaker called for democratic participation in shaping AI governance, ensuring that AI becomes a tool for empowerment aligned with human dignity and democratic values.
- AI can drive equity in underserved communities and optimize agriculture for small farmers.
- India's digital journey demonstrates large-scale innovation with democratic accountability via open standards and public-private collaboration.
- Trust in AI systems is to be built through safeguards, grievance mechanisms, and robust data governance.
- Geopolitical competition now intersects with technological advancement; collaborative AI governance and international standards are vital.
- Fragmented regulations risk creating digital silos and inequalities between nations.
- Capacity-building for policymakers, regulators, judges, and citizens is equally important as regulation for effective AI governance.
- Three guiding governance principles: human-centricity, inclusivity, and adaptability.
- Rule of law is foundational to building trust, enabling adoption, and sustaining legitimacy in AI, rather than hindering innovation.
- The future of AI governance depends on legislative, judicial, and democratic processes—not just technical or commercial domains.
- Aligning AI innovation with democratic values is necessary to ensure technology enhances both progress and human dignity.
The Governance Gap: Designing Global Standards for AI Advisory Boards
This session at the India AI Impact Summit 2026 focused on regulatory governance and the institutional design of AI advisory boards, emphasizing the complexities of regulating a rapidly evolving AI landscape. Panelists highlighted the uneven global development of AI, the diversity of regulatory attitudes, and the tension between government-led regulation and industry self-regulation. Julie, representing the Meta Oversight Board, outlined the board’s independent, binding, and diverse approach to content governance, grounded in a human rights framework. She emphasized the need for transparent and accountable advisory models that can be trusted by the public and industry alike. Saurabh, representing Sarvam AI, discussed how emerging Indian AI companies engage proactively with regulators and civil society, focusing on data quality, safety, and accountability in deploying AI systems. The discussion acknowledged India’s regulatory strategy, which seeks to balance innovation with strong accountability, demonstrated by recent binding regulations on deepfakes. Both panelists agreed on the importance of collaborative, adaptive governance models and internal accountability mechanisms throughout the AI supply chain to ensure responsible adoption while enabling technological advancement.
- Regulation of AI in India is being designed to be adaptive, focusing on balancing innovation with accountability rather than imposing a stifling regime.
- India has recently implemented binding regulations specifically targeting deepfake content, signaling a firmer stance on AI-mediated harms.
- The Meta Oversight Board serves as an independent, global, and diverse advisory board making binding decisions on content moderation; 75% of its recommendations have been adopted by Meta.
- The board operates based on a rigorous human rights framework, covering rights such as free expression, privacy, and safety, with formal mechanisms ensuring independence and transparency.
- Indian AI firms like Sarvam AI prioritize collaboration with regulators and civil society, employ strong data curation practices, and establish internal guardrails to prevent misuse and ensure model safety.
- There is an emerging trend in Indian regulation to hold each entity in the AI value chain accountable based on their role, rather than placing full liability on a single party.
- Industry stakeholders and regulators are maintaining ongoing dialogue, allowing regulatory approaches to evolve in response to technological and business realities.
Operationalising Open-Source AI: Pathways to Digital Sovereignty
The session at the India AI Impact Summit 2026 centered on the relationship between open source AI, sovereignty, and regulatory frameworks, particularly in the Indian context. Panelists explored the limitations and opportunities presented by both proprietary and open source AI ecosystems, arguing that open source AI offers a greater—but not guaranteed—opportunity for nations, organizations, and individuals to assert technological agency and autonomy. With increasing centralization by a handful of global platforms and countries, the discussion underscored open source AI as a strategic avenue for fostering competition and reducing dependency risks. Policymakers and practitioners highlighted India's position as both a large AI market and a nascent creator, referencing landmark domestic initiatives like UPI and the Vishwam project as case studies. They stressed the need for a balanced regulatory approach that is strict in sensitive sectors (such as healthcare and education) but otherwise light-touch to encourage both domestic innovation and foreign collaboration. The panel concluded that realizing true AI sovereignty requires not just open source models, but investment in the entire technology stack—including compute infrastructure, governance systems, and ecosystem maturity—tailored to local contexts and needs, rather than a one-size-fits-all approach.
- Open source AI is seen as a key strategic option to counter centralization and preserve sovereignty but is not a universal guarantee; much depends on ecosystem maturity and active community investment.
- India's regulatory experiences, such as UPI (Unified Payments Interface) and the growth of its software services industry, serve as instructive models for shaping AI governance.
- The Andhra Pradesh government’s Vishwam project demonstrates regional leadership and pathbreaking work on open source AI in India.
- Policy recommendations include sector-specific regulation (e.g., stricter in healthcare, lighter in others) to promote innovation while safeguarding critical areas.
- India is producing vast AI-relevant datasets and tokens, creating the basis for large language models (LLMs) tailored to local needs.
- True AI sovereignty is multi-layered, requiring local control across models, compute infrastructure, data governance, and cultural/linguistic adaptation.
- The launch of Sarvam AI and ongoing rollouts signal India’s commitment to more autonomous AI development strategies.
- Open source, while offering transparency and developer optionality, does not automatically ensure accountability or resilience—intentional ecosystem and capability-building are necessary.
- Sovereignty must be understood as extending beyond code; standards, compute, guardrails, and governance mechanisms are all critical.
The Intelligent Cloud: How AI Is Transforming the Cloud-Native World
The session, led by leaders from Mirantis at the India AI Impact Summit 2026, focused on the intersection of AI and cloud-native technologies, particularly Kubernetes, in contemporary enterprise environments. The speakers highlighted the explosive growth in Kubernetes adoption for AI/ML workloads, its emergence as a common control plane, and the complexity introduced by multicloud, multicluster deployments. With over 15.6 million cloud-native developers—52% active in AI/ML—the discussion underscored the need for robust, automated infrastructure and standardized tooling to efficiently manage AI operations at scale. Key challenges addressed included infrastructure fragmentation, regulatory compliance, GPU scheduling, and the operational difficulties of onboarding and scaling AI platforms. The team outlined how open-source communities, platform engineering, and evolving automation strategies are vital for overcoming these barriers and enabling the rapid iteration required in the age of intelligent applications.
- Session addressed the convergence of AI, cloud-native tools, and Kubernetes as the future operating system for infrastructure.
- Mirantis recognized as a pioneer in private cloud, now emphasizing open-source and platform engineering for AI/Kubernetes.
- Kubernetes now used by over 15.6 million cloud-native developers, with 52% (7.1 million) focused on AI/ML workloads.
- 36% of developers are already running AI workloads on Kubernetes, while 18% are in the developmental pipeline.
- Kubernetes provides autoscaling, GPU-awareness, cross-cloud deployment, and rapid innovation for AI workloads.
- Key infrastructure challenges: rising complexity from multicluster/multicloud, fragmented tooling, and lack of unified management.
- AI-specific challenges: lack of tooling standardization, operational inefficiencies, and long lead times for GPU provisioning.
- Regulatory and compliance requirements are increasing, mandating consistency and visibility across platforms.
- Open-source communities and composable, vendor-neutral infrastructure cited as essential for meeting the needs of AI-driven operations.
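The GPU-awareness and cross-cloud deployment mentioned above ultimately rest on declaring device-plugin resources in a pod spec, which the scheduler then matches to GPU-equipped nodes. The following is a minimal, illustrative Python sketch (the function, pod name, and image are hypothetical, not from the session); `nvidia.com/gpu` is the standard resource name exposed by NVIDIA's Kubernetes device plugin.

```python
# Sketch: building a Kubernetes pod manifest that requests GPUs, the basic
# mechanism behind GPU-aware scheduling of AI workloads on Kubernetes.
# All names here are illustrative placeholders.

def gpu_pod_manifest(name: str, image: str, gpus: int = 1) -> dict:
    """Return a pod manifest asking the scheduler for `gpus` NVIDIA GPUs."""
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": name},
        "spec": {
            "containers": [{
                "name": name,
                "image": image,
                # GPUs are extended resources: they are set under "limits",
                # and Kubernetes schedules the pod only onto a node whose
                # device plugin advertises enough "nvidia.com/gpu" capacity.
                "resources": {"limits": {"nvidia.com/gpu": str(gpus)}},
            }],
            "restartPolicy": "Never",
        },
    }

manifest = gpu_pod_manifest("train-job", "example.com/trainer:latest", gpus=2)
print(manifest["spec"]["containers"][0]["resources"]["limits"])
```

In practice such a dict would be serialized to YAML or submitted via a Kubernetes client; the point is that GPU scheduling is declarative, which is what lets platform teams automate it across clusters.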
Open-Source Intelligence: The New Frontier of Climate Negotiations
The session provided an in-depth overview of the complexities in multilateral climate negotiations and showcased 'Negotiate COP', an AI-powered, open-source tool developed by the German Federal Ministries for enhancing equity and efficiency in UN climate talks. Dr. Arunabha Ghosh highlighted structural inequalities faced by delegations, particularly from less-resourced countries, emphasizing the critical need for tools that can democratize interpretive and analytical capacity. 'Negotiate COP' addresses these disparities by granting all negotiators fast access to and analysis of over 600 official UN climate submissions, offering features including summarization, position comparison, and an AI-driven portal chat for insight extraction. The tool, which does not collect user data, was tested during COP 30 and received enthusiastic interest, representing a model of how AI commons can function as equitable digital public goods in a multilateral setting. The panel stressed the importance of transparency, trust, privacy, and co-development to foster fairer negotiations and called for broad participation in further evolving such tools.
- Dr. Arunabha Ghosh discussed long-standing analytical and interpretive capacity gaps among country delegations in climate negotiations, especially for developing nations.
- 'Negotiate COP' is an open-source, AI-driven public good developed by Germany's Foreign, Environment, and Economic Cooperation Ministries along with GIZ as a whole-of-government initiative.
- The tool provides rapid access to, and analysis of, over 600 official submissions from COP 29 and COP 30, extracting key asks and fixed positions and facilitating structured comparisons between parties.
- Negotiate COP uses language models to extract negotiation stances through action-word detection and offers real-time insights via a privacy-first platform powered by renewable energy and hosted in Germany.
- Features include a submissions explorer, position comparison tool, and a retrieval-augmented generation (RAG) chat system for targeted, sourced Q&A with the document corpus.
- The architecture comprises automated scraping of official UN documents, categorization via LLMs, and a transparent, open-access interface.
- The tool was launched and tested at COP 30 with widespread interest from delegations across the spectrum and is freely available to all at negotiatecop.org.
- Negotiate COP was designed with negotiator input and aims to reduce information asymmetry, build trust, and promote equitable access in international negotiations.
- No personal or sensitive data is collected, reflecting a strong commitment to privacy and trust-building.
- The session called for continued co-development and open participation to extend the model of AI-powered digital public goods across other multilateral domains.
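The retrieval-augmented generation (RAG) chat described above can be reduced to a simple core: embed each submission, rank submissions by similarity to the negotiator's question, and hand the top matches to a language model as sourced context. This is an illustrative sketch under that assumption, not Negotiate COP's actual code; it uses a toy bag-of-words embedding in place of a real embedding model, and the sample "submissions" are invented.

```python
# Sketch of the retrieval step in a RAG pipeline: rank a corpus of
# negotiation submissions against a query by cosine similarity of
# bag-of-words vectors, and return the top-k documents for the LLM to cite.
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy embedding: term-frequency vector over lowercased tokens."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Return the k corpus documents most similar to the query."""
    q = embed(query)
    ranked = sorted(corpus, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

# Invented stand-ins for official submissions.
submissions = [
    "Party A asks for increased adaptation finance for developing countries",
    "Party B holds a fixed position on fossil fuel phase-out timelines",
    "Party C proposes a transparency framework for emissions reporting",
]
print(retrieve("adaptation finance", submissions, k=1))
```

A production system would swap the toy embedding for a neural embedding model and a vector index, but the retrieve-then-generate structure, and the fact that answers stay grounded in the retrieved documents, is the same.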
India AI Impact Buildathon 2026 | AI for Social Good & Cyber Safety | India AI Impact Summit 2026
This session at the India AI Impact Summit 2026 showcased innovative Indian solutions for detecting AI-generated audio, with a focus on scalability, accessibility, and addressing the unique linguistic and infrastructural challenges of the Indian market. Anurag Manik presented 'Kartav', a REST API-based tool leveraging platforms like OpenAI Whisper and Gemini to analyze and differentiate between AI and human voices. The solution demonstrated significant accuracy, support for 8–10 Indian dialects, and flexibility, but faced scrutiny regarding REST API scalability, performance under low bandwidth, and reliance on foreign models. Judges urged a pivot toward developing or utilizing Indian-trained AI models and technology stacks for future-proofing and improved localization. The second finalist, 'Walker Penguins', introduced a lightweight (2 MB) convolutional neural network model converting audio to spectrum images for detection, trained on over 100,000 diverse samples—including regional languages—and designed for edge device deployment, achieving 98% accuracy and sub-5ms inference. The session closed with an emphasis on the importance of teamwork, collaboration, and building indigenous technology to address both local and global challenges in AI-generated audio fraud.
- Kartav tool uses OpenAI Whisper for transcription and Gemini (2.5 Pro/Flash) for analysis to distinguish between AI-generated and human voices.
- Plug-and-play REST API architecture designed to integrate with existing workflows; accuracy: ~90% for uploads, 75–80% for live recordings.
- Tested on 8–10 Indian dialects (including Chhattisgarhi, Tulu) with promising results.
- REST API reliance criticized for bandwidth issues (2G/3G prevalence in India) and potential overfetching/underfetching problems.
- Judges highlighted the need for solutions built on Indian LLMs/data stacks (e.g., Bhashini), with offers of support for localized model training.
- 'Walker Penguins' built a 2MB lightweight, edge-deployable CNN model (image-based audio spectrum input) with 98% accuracy and 5ms runtime.
- Walker Penguins trained on 100,000+ TTS/deepfake audio samples across 10+ regional languages using the ASVspoof dataset and manual samples.
- Concerns were raised about sustained value amid rapid AI advances; emphasis was placed on developing scalable, future-proof solutions and on collaboration among Indian teams.
- Both solutions demonstrated by solo/small teams within 40–50 hours of development, leveraging global and open-source tools.
- Judges encouraged unifying participants into teams and prioritizing 'Make in India' technologies to combat AI-driven scams, especially those with cross-border implications.
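The Walker Penguins approach hinges on converting raw audio into a spectrogram "image" that a small CNN can classify as synthetic or human. This is an illustrative sketch of that preprocessing step under stated assumptions (frame size, hop length, and the synthetic test tone are all invented for the example; the finalists' actual pipeline was not shown).

```python
# Sketch: short-time FFT turning a waveform into a log-magnitude
# spectrogram, the image-like input a lightweight CNN would classify.
import numpy as np

def spectrogram(wave: np.ndarray, frame: int = 256, hop: int = 128) -> np.ndarray:
    """Rows are time frames, columns are frequency bins (one-sided FFT)."""
    window = np.hanning(frame)  # taper each frame to reduce spectral leakage
    frames = [
        wave[start:start + frame] * window
        for start in range(0, len(wave) - frame + 1, hop)
    ]
    # Magnitude spectrum per frame, log-compressed so quiet harmonics
    # (where synthesis artifacts often hide) remain visible to the model.
    mags = np.abs(np.fft.rfft(np.stack(frames), axis=1))
    return np.log1p(mags)

# A 1-second synthetic 440 Hz tone at 8 kHz stands in for real speech.
sr = 8000
t = np.arange(sr) / sr
spec = spectrogram(np.sin(2 * np.pi * 440 * t))
print(spec.shape)  # (time_frames, frame // 2 + 1 frequency bins)
```

Once audio is an array like this, detection becomes an ordinary image-classification problem, which is what makes a 2 MB edge-deployable CNN with millisecond inference plausible.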
Responsible AI in Social Welfare Delivery
This session at the India AI Impact Summit 2026 focused on the critical challenges, safeguards, and operational realities of deploying AI systems in India's extensive social welfare infrastructure. Panelists addressed the evolution of responsible AI—from early concerns around bias and access, to the emerging focus on the scalability and infrastructure required to support ethical AI at a national level. Real-world examples illustrated the severe consequences of AI errors in welfare delivery, such as wrongful exclusion and bureaucratic hardship for vulnerable citizens, underlining the importance of robust, rapid redressal mechanisms. The session underscored the need for both micro-level safeguards (transparency, explainability, and grievance redressal) and macro-level policy—including independent auditing and procurement standards. Enterprise representatives detailed internal governance mechanisms like diverse ethics boards and impact assessments, emphasizing the centrality of human dignity. The consensus highlighted five foundational principles for AI in social welfare: appropriateness, accuracy, availability, accountability, and alignment with human dignity—arguing that without these, technological efficiency is irrelevant. The session concluded by reinforcing a multi-stakeholder and governance-driven approach to ensure that responsible AI enhances rather than harms lives in the context of India's social programs.
- AI deployment in India's social welfare sector demands evolving safeguards: early focus was on diversity and bias, now shifting to infrastructural scalability and operational support.
- Major risks of AI in welfare—such as wrongful exclusion and bureaucratic hardship—translate directly into real-world suffering, as shown by multiple case studies of pensioners and welfare recipients being misclassified or denied benefits.
- Burden of proof for correcting AI errors often unfairly falls on citizens, with current redressal mechanisms inadequate to promptly resolve exclusions or errors.
- Panelists advocated for transparent, explainable AI processes with clear duty of care from social welfare providers, including simplified and rapid grievance redressal.
- At the policy and procurement level, robust assurance measures like independent audits and techno-legal standards are essential before and during AI deployment.
- Corporates like Adobe have instituted AI ethics review boards and multi-stage impact assessments, emphasizing diverse oversight and continuous feedback.
- Panelists agreed on five foundational principles for AI in social services: appropriateness, accuracy, availability, accountability, and alignment with human dignity.
- Without strong governance across technical, operational, and policy layers, the benefits of AI risk being outweighed by social harm.
- The session assembled insights from government, civil society, academia, and industry, reflecting a multistakeholder commitment to responsible AI deployment.
Why AI Evaluation Matters | Building Trust and Impact in the Social Sector
This session at the India AI Impact Summit 2026 brought together industry and government voices to address how AI is reshaping the Indian labor market, with a focus on both opportunities and challenges. Shri Kartik Naranchi, representing the private sector and recruitment industry, highlighted how automation is displacing many entry-level roles but also expanding access to credit, entrepreneurship, and productivity in traditionally underserved segments such as rural India and kirana shops. He advocated for a skills transformation—toward dynamic, stackable, and modular competencies—and stressed the importance of shifting from job-centric to task-centric paradigms, with greater emphasis on soft skills like empathy and judgment. Kartik also discussed the rapid scaling of new job categories such as AI trainers, especially in tier 2 and 3 cities, and the role of vernacular and accessible AI in democratizing opportunity. A.J. Sharma, from the Ministry of Labour and Employment, conveyed the government's positive outlook, pointing to labor force data that show rising workforce participation and decreasing unemployment amid advancements in AI and technology. The session distinguished itself by framing the challenge not as job loss, but as ensuring the expansion and democratization of opportunity, with institutions and leadership playing critical roles in shaping the AI-driven future of work in India.
- Significant automation is occurring in entry-level jobs across compliance, fraud detection, and other functions, leading to workforce reductions in some areas.
- The industry is witnessing millions of Indians still outside formal credit networks; AI presents opportunities to expand economic inclusion by lowering costs in rural banking and entrepreneurship.
- Manufacturing and logistics automation are not necessarily leading to job losses, but are poised to shift workers from low- to high-productivity roles, with government focus on scaling manufacturing.
- There are 12 million kirana shops in India; AI can help them reduce wastage, improve pricing, and manage inventory rather than centralizing power in digital platforms.
- AI impacts routine tasks but increases the value of human judgment, empathy, and creativity, signaling a shift from purely job-centric to task-centric employment.
- Three-pronged policy recommendations: (1) Make skills dynamic, modular, and stackable; (2) Emphasize task-centric and soft skill development; (3) Build institutional bridges between policy, platforms, and employers, with real-time labor market alignment.
- India's digital public infrastructure and linguistic diversity are global strengths; vernacular AI interfaces are necessary for full inclusion.
- Recruitment industry data: 6 crore registered jobseekers, 7 lakh jobs posted last year, and 9 crore job applications; 50–60% of job applications now come from tier 2 and 3 cities.
- New roles such as AI trainers/data evaluators have emerged, often accessible to non-graduates from smaller cities, earning Rs. 20,000–40,000/month.
- Key employer skill requirements: prompt engineering, AI-augmented problem solving, and the ability to critically evaluate AI outputs.
- AI is democratizing interview preparation, resume writing, and job application processes, making tools accessible at scale to all jobseekers.
- Government data from the Periodic Labour Force Survey and claims data reflect a consistent decrease in unemployment and an increase in labor force participation and Worker Population Ratio (WPR), despite AI adoption.
- The real choice for India is to shape AI for inclusive growth, rather than be shaped by it, maintaining focus on augmenting human capital and leveraging the country’s demographic dividend.
Genomics and AI for Global Health | Empowering the Global South Through Secure Data Access
The session at the India AI Impact Summit 2026 focused on the critical importance of developing AI-driven healthcare solutions tailored for the Global South, emphasizing the urgent need for equitable representation of southern populations in biomedical datasets and AI models. Speakers underscored how current solutions and datasets predominantly cater to the Global North, exacerbating inequities and limiting the effectiveness and accessibility of healthcare innovations for majority-world populations. India's robust digital infrastructure, vast population, and leadership potential were highlighted as enablers to drive globally relevant, affordable, and accessible healthcare technologies. A central segment of the session introduced 'Biovolt', an open-source, privacy-first data visitation platform developed by OpenMined Foundation. Unlike traditional data-sharing models that require data transfer and expose privacy risks, Biovolt enables collaborative analysis on sensitive biomedical data without moving or copying the data itself. Featuring strong encryption and decentralized computation, Biovolt empowers data owners in the Global South to retain control, actively shape research outcomes, and participate equitably in global scientific collaboration. The rollout of Biovolt, along with ongoing collaborative projects and new pre-publications, demonstrates the tangible progress being made toward data-sovereign, privacy-preserving, and inclusive AI-driven biomedical research.
- Only 15% of the world's population resides in the Global North, but current AI and biomedical models are largely trained on data from this minority.
- Large parts of the Global South, including India, are underrepresented in major datasets, leading to less effective or unsuitable healthcare solutions.
- The Human Genome Project 2 is striving to rectify biases by including more diverse global populations.
- India's digital public infrastructure (like Aadhaar) and skilled workforce position it to lead AI healthcare solutions for the Global South.
- The session introduced Biovolt, an open-source, privacy-first data visitation platform designed for secure global biomedical collaboration.
- Biovolt allows researchers to query and compute on sensitive health data without requiring it to be copied, moved, or uploaded, ensuring data privacy and sovereignty.
- The platform supports encrypted, federated, multi-party computation and analysis, facilitating equitable scientific participation and benefit sharing.
- Biovolt’s features include easy-to-use desktop utilities, end-to-end encryption, and compatibility with common analysis tools such as Nextflow and Jupyter.
- A new preprint describing Biovolt and its use in real-world cases, such as federated analysis of single-cell RNA sequencing and large clinical datasets, was announced as released on bioRxiv.
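The data-visitation model described above can be illustrated with a toy sketch (plain Python; this is not Biovolt's actual API, and `DataOwner`, `sum_and_count`, and `federated_mean` are hypothetical names). Each site runs an approved query where the data lives and releases only aggregates, so a global statistic is computed without any raw record leaving its owner:

```python
from dataclasses import dataclass, field

@dataclass
class DataOwner:
    """A site holding sensitive records that never leave this object."""
    name: str
    records: list = field(default_factory=list)

    def visit(self, approved_query):
        # The query executes locally; only its (aggregate) output is returned.
        return approved_query(self.records)

def sum_and_count(records):
    # An aggregate query a researcher might submit for approval.
    vals = [r["expr"] for r in records]
    return sum(vals), len(vals)

def federated_mean(owners):
    # Combine per-site aggregates into a global mean without pooling data.
    pairs = [owner.visit(sum_and_count) for owner in owners]
    total = sum(s for s, _ in pairs)
    n = sum(c for _, c in pairs)
    return total / n

site_a = DataOwner("hospital_a", [{"expr": 1.0}, {"expr": 3.0}])
site_b = DataOwner("hospital_b", [{"expr": 5.0}])
global_mean = federated_mean([site_a, site_b])  # 3.0
```

A real platform would add encryption, query approval workflows, and output-privacy checks on top of this pattern; the sketch only shows why no raw data movement is needed for joint analysis.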
Democratizing AI for Social Good | India AI Impact Buildathon 2026 | India AI Impact Summit 2026
The session at the India AI Impact Summit 2026 featured the high-stakes pitch round of the AI Buildathon, in which the top six teams—split evenly between student and professional groups—competed for a share of a 4 lakh INR prize pool. Teams presented innovative AI solutions to a distinguished jury comprising leaders from Indian industry, academia, technology, and popular content creators. The first pitch spotlighted 'Davy Codes', a professional team that developed an AI voice detection system aimed at distinguishing between human and AI-generated voices, addressing issues like voice cloning and fraudulent calls. Their solution leveraged transfer learning with dual models ('the ear and the eye') and was trained on diverse datasets, including multilingual and noisy audio. The jury probed the team on technical nuances, false positive rates, real-world scalability, latency, and applicability in scenarios with hackers or human variance (e.g., illness). The team acknowledged current limitations and trade-offs, especially regarding accuracy over latency, and expressed intentions to enhance scalability and robustness. The rigorous pitch format, comprising a strict two-minute presentation and eight minutes of Q&A, underscored the summit's focus on real-world impact, technical depth, and the viability of AI solutions in high-security contexts.
- Top six teams (three students, three professionals) shortlisted for the Buildathon’s final pitch round.
- Total prize pool of 4 lakh INR, split equally between student and professional categories (INR 1 lakh for 1st, INR 60,000 for 2nd, INR 40,000 for 3rd in each).
- Distinguished jury panel included leaders from State Bank of India, Ministry of Education, MongoDB, Cell Technologies, a top tech content creator, and GUI’s Chief Strategy Officer.
- Pitch round format: 2-minute presentation + 8-minute Q&A, evaluated on problem clarity, innovation, technical strength, applicability, and thought process.
- First professional team, 'Davy Codes,' presented an AI voice detection system using ensemble models trained on 5,000 data samples to differentiate AI-generated from human voices, including multilingual and noisy audio scenarios.
- Technical tradeoff prioritized accuracy over latency, achieving ~200ms latency with focus on minimizing false positives.
- Jury questioned real-world robustness: handling of fraud, voice bots, cyber-security attacks, speaker illness; team highlighted strengths and acknowledged areas for further work.
- The session exemplifies the critical scrutiny and real-world expectation for AI solutions in financial and daily-life security applications.
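The 'ear and eye' dual-model approach described above amounts to score fusion with a precision-oriented decision threshold. A minimal sketch (hypothetical names and weights, not the team's code) assuming each model emits a score in [0, 1], where 1 means "likely AI-generated":

```python
def fuse_scores(ear_score: float, eye_score: float, w: float = 0.5) -> float:
    """Weighted average of the waveform ('ear') and spectrogram ('eye') scores."""
    return w * ear_score + (1 - w) * eye_score

def classify(ear_score: float, eye_score: float, threshold: float = 0.8) -> str:
    # A high threshold favors precision: a call is flagged as AI-generated
    # only when the fused score is confident, reducing false positives
    # (i.e., humans wrongly flagged as synthetic voices).
    return "ai" if fuse_scores(ear_score, eye_score) >= threshold else "human"
```

Raising `threshold` trades missed detections for fewer false positives, echoing the priority the team described; in practice the two scores would come from trained models (e.g., via transfer learning) rather than being passed in directly.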
Women Entrepreneurs in AI: From Implementers to Innovators
The opening session of the India AI Impact Summit 2026, hosted by the BRICS Chamber of Commerce and Industry (BRICS CCI), emphasized shifting from implementation to innovation by empowering women in the AI ecosystem. Underlining India’s leadership during its BRICS presidency, the summit highlighted the need for inclusive and responsible AI development. The BRICS CCI Women Empowerment vertical’s efforts to build a global platform for mentoring and supporting women entrepreneurs in AI were showcased, including the launch of 'Renueurship Globally' to foster a leadership pipeline. Distinguished speakers stressed moving women from mere users to creators, founders, and policy-shapers in AI. Data shared indicated that while women currently lead approximately 20% of startups and MSMEs, this number is steadily climbing, with AI rapidly being adopted across women-led ventures in finance, retail, infrastructure, and smart city technologies. The discussion also spotlighted the critical necessity for women's leadership in AI regulation and governance, especially in creating safe and trusted AI systems. The session set the tone for collaborative action, policy advocacy, and global mentorship to boost the role of women in shaping the future of AI.
- India leads the BRICS Presidency in 2026 with a focus on technology, digital innovation, entrepreneurship, and responsible AI.
- BRICS CCI Women Empowerment vertical launched 'Renueurship Globally' to mentor and create a worldwide leadership pipeline for women in future technologies and AI.
- Approximately 20% of startups and MSMEs are currently led by women as founders or co-founders—a number that is growing year on year.
- AI adoption by women-led enterprises is seen across fintech, retail, infrastructure, smart cities, smart mobility, and toll management systems.
- AI is viewed as integral infrastructure, not just a future technology, making women’s participation critical at all ecosystem levels.
- The summit’s theme, inspired by India's prime minister, focuses on 'democratizing AI'—emphasizing gender equality as central to this mission.
- There is a call for more women in AI regulatory and policy leadership, addressing current gaps seen in international scientific and regulatory projects.
- A diverse panel, including founders, investors, policy experts, and disruptors, is committed to shifting the narrative from women as implementers to role models and innovators in AI.
- Recognition of ongoing challenges, like women being underrepresented in AI research and regulation, along with calls for cross-border mentorship and collaboration.
Securing the Future: AI Security in the Age of Autonomous Agents
The panel discussion at the India AI Impact Summit 2026 brought together key stakeholders from industry, policy, regulatory, and international organizations to explore the crucial balance between AI innovation and regulation. Syed Ahmed of Infosys moderated a diverse panel including representatives from the Reserve Bank of India’s Fintech Department, UNESCO, and the Australian High Commission, each of whom shared perspectives on responsible and trustworthy AI. Discussions spanned real-life applications of AI, the ethical and regulatory challenges of autonomous agents, the implications of cultural and linguistic inclusivity, and the necessity of post-deployment oversight for AI systems. UNESCO’s universal ethical framework, RBI’s focus on human agency and explainability, the Australian government’s emphasis on transparency, and the recognition of bias and data quality issues emerged as central pillars in forming trustworthy AI governance. The panel underlined the need for inclusivity, ongoing monitoring, and principled frameworks as AI becomes more pervasive in society and industry.
- UNESCO’s 2021 Recommendation on the Ethics of Artificial Intelligence, adopted by 193 member states including India, provides a principles-driven, technology-agnostic policy framework for global AI governance.
- The panel included policymakers, regulators, and international representatives: Reserve Bank of India (RBI), UNESCO, and the Australian High Commission, highlighting a multi-stakeholder approach.
- RBI affirmed that AI agents cannot transact independently in financial systems—the underlying human users remain accountable and are subject to explainability and creditworthiness evaluations.
- The Australian government highlighted transparency as a non-negotiable principle for trustworthy AI, balancing innovation and risk management for public safety and competitive advantage.
- UNESCO ties AI trustworthiness to human rights and dignity, placing human oversight at the center and advocating for strong post-deployment monitoring of AI systems to prevent misuse and unintended harm.
- The discussion emphasized that inclusivity—especially for underrepresented groups and languages—directly improves AI effectiveness and broadens markets.
- Bias, data quality, and a lack of post-deployment oversight were cited as critical challenges to AI trustworthiness, demanding continuous regulatory vigilance.
- Practical examples, such as consumer use of AI-powered photo sorting, were used to illustrate pervasive but often overlooked AI influences in daily life.
AI and Children: Turning Safety Principles into Practice
The session at the India AI Impact Summit 2026 focused on translating ethical principles for artificial intelligence (AI) into practical measures to ensure the safety, inclusivity, and empowerment of children. Co-hosted by FICCI and UNICEF, the event highlighted India's commitment to integrating ethical AI in education, governance, and child protection. Key dignitaries, including government officials and youth advocates, stressed the rapid expansion of AI in sectors touching millions of young lives across India—where one in three internet users is under 18 and there are over 250 million schoolchildren. Notable announcements included the integration of AI and computational thinking into the national school curriculum beginning from grade 3, the establishment of regulatory and governance frameworks to safeguard children's interactions with AI, and the launch of a nationwide AI challenge for youth aged 13–20. Prasidhi Singh, a 13-year-old UNICEF youth advocate, presented demands representing over 54,000 young people globally, calling for diverse data sets, strong data protection, educational investment, parental support, targeted upskilling, transparent AI in recruitment, and active youth involvement in AI lifecycle decisions. Government leaders reaffirmed their dedication to balancing opportunity and risk, sharing recent initiatives such as mandatory labeling of synthetically generated content and the AI governance technological framework. The session encapsulated India’s intent to lead in human-centered, responsible AI innovation, ensuring the next generation becomes not just AI consumers but creators and decision-makers.
- UNICEF and FICCI partnered for a special session on the ethical, safe, and empowering use of AI for children.
- India has over 250 million school-going children; nearly 1 in 3 internet users in India is under 18.
- India’s National Education Policy 2020 embeds AI and computational thinking into the school curriculum from grade 3, starting next academic session.
- A youth statement representing 54,000+ young people from 184 countries outlined critical demands for child-centric AI: inclusive data sets, strong data protection, ethical standards, educational readiness, parental guidance, access to emerging opportunities, upskilling, transparent AI-driven recruitment, and deep involvement of youth across the AI decision cycle.
- Ministry of Electronics & IT recently released new regulations requiring labeling of synthetically generated content, with strict protection policies for children’s online safety.
- A nationwide AI challenge for children aged 13–20 years was launched to boost creative AI participation.
- India ranks third globally in the Stanford AI Vibrancy Index, mainly due to youth engagement and talent.
- The session reinforces the need for translating AI governance principles into practical safeguards and capacity building, particularly for children as early AI adopters.
BioAI: Integrating Artificial Intelligence & Biology for Breakthrough Innovation
The session at the India AI Impact Summit 2026 explored how artificial intelligence is revolutionizing biology and biomanufacturing, highlighting its pivotal role in accelerating vaccine development, next-generation therapeutics, and sustainable materials. Key stakeholders from academia, industry, and government discussed India's ambitious plans to expand its genomics data repository from 10,000 to one million genomes, establishing a foundation for disease modeling and personalized medicine. Panelists emphasized that AI is becoming the cornerstone 'language' of modern biology, enabling the creation of synthetic life, virtual labs, and AI-designed therapeutics such as antibiotics and nanobodies. The discussion acknowledged the paradigm shift in both research and manufacturing ecosystems where AI-driven solutions significantly increase precision, scalability, and sustainability. With India's rich biodiversity and large population, the country is poised to drive breakthroughs at global scale, while also integrating indigenous knowledge and further developing multidisciplinary talent to meet future demands.
- India plans to expand its national genomics database from 10,000 to 1 million sequences, supporting widespread phenotypic and disease modeling.
- AI-powered biomanufacturing was critical for delivering more than 2 billion COVID-19 vaccine doses in India.
- AI is enabling virtual labs where researchers can design novel molecules and therapeutics, reducing dependency on animal-derived antibodies.
- Government-supported 'BioAI hubs' are fostering collaboration between biologists, computer scientists, IT professionals, and industry.
- The chemical industry is set for a profound shift: AI-driven biofactories are being developed for biodegradable polymers, biobased chemicals, and carbon capture technologies, aiming to replace petrochemical-derived materials over the next 20–25 years.
- AI's role is likened to both a microscope and a telescope for biology, unveiling unseen patterns and facilitating cross-disciplinary integration.
- Next-generation medicine includes AI-guided genetic therapies and the synthesis of entirely new life forms—synthetic chromosomes and bacteria.
- There is an urgent call for investment in research, manufacturing, and ecosystem-building, with special emphasis on interdisciplinary education combining biology and computation.
- India's unique biodiversity presents a global opportunity for AI-driven knowledge discovery, including the scientific validation of ancient wellness traditions.
AI in Food and Agriculture: Transforming Systems from Farm to Fork
The session at the India AI Impact Summit 2026 addressed the challenges and gaps in deploying AI solutions for agriculture, particularly in the Global South and among smallholder farmers. Panelists emphasized the need to critically evaluate public expenditure on AI, focusing on value for money, safety, and inclusivity. Findings from a systematic review, conducted for the FCDO and led by Athena Infinomics, highlighted that most AI interventions remain at a pilot stage, with adoption hindered by lack of trust, low digital literacy, gender disparities, capacity building gaps, and insufficient localized governance frameworks. A key insight was the limited impact of current AI solutions on the most marginalized, especially women and smallholders, and a pervasive lack of reporting on data types, ground-truthing, and ethical considerations in academic studies. Government perspectives, notably from Jammu and Kashmir, illustrated the challenges of scaling pilots to institutionalized, inclusive impact in diverse agro-climatic and socio-economic contexts. The session concluded that robust, participatory design, better reporting standards, targeted policy frameworks, and genuine community involvement are essential for sustainable, equitable AI adoption in agriculture.
- A systematic review (2019–2024) surveyed 55 journals, 19–20 repositories, and 100+ pieces of gray literature, yet found few studies reporting on data usage, ethics, or ground-truthing for AI models.
- Most AI interventions in agriculture are at a nascent, pilot stage and lack sustained, scalable impact, especially for smallholder farmers.
- Significant adoption barriers include limited trust in technology, persistent digital illiteracy, and limited capacity-building initiatives.
- Gender disparities remain pronounced: AI solutions often exclude women, both as users (due to asset and digital access gaps) and as contributors to solution design.
- Current AI models tend to perpetuate existing inequalities rather than ameliorate them, especially in settings where community context and cultural factors are overlooked.
- Financial inclusion remains challenging, with smallholders and women often excluded from AI-enabled financial services.
- Absence of localized, agriculture-specific AI governance frameworks and policy standards is a key gap; need for sector-wide reporting checklists highlighted.
- Government experiences, such as in Jammu and Kashmir, affirm that pilots must be adapted to local diversity and institutionalized for true impact, underscoring the risks of a one-size-fits-all or pure tech-first approach.
Power, Protection, and Progress: Legislating for the AI Era
The session, 'Power, Protection, and Progress: Legislating for the AI Era', convened distinguished panelists from Indian and Israeli legislative and policy backgrounds to discuss the transformative impact of artificial intelligence on democratic governance, economic strategy, and societal trust. Key discussions emphasized that AI governance is as much a democratic and sovereignty concern as a technical one, and legislators need not master the technicalities but must understand AI's broader economic and societal impacts. India's growing primacy in AI talent and data production is contrasted with its current lack of GPU compute capability and hardware sovereignty, which is seen as the critical bottleneck and a strategic autonomy challenge. The conversation also covered the necessity to democratize AI's benefits across socio-economic strata, concerns about youth and data privacy, and the importance of building trusted, evidence-based, and human-augmented legislative processes. Israeli experience highlighted the need for incremental, trust-focused adoption of AI in government to preserve public confidence in legislative institutions.
- Legislators and experts see AI governance as a question of democracy and sovereignty, not merely technology.
- India has produced about 20% of global data and boasts the world's highest AI skill penetration, but suffers from a compute (GPU) capacity gap due to monopolies in design (e.g., Nvidia), manufacturing (TSMC/Taiwan), and export controls (USA).
- The Indian parliament is actively discussing social media restrictions for under-16s due to concerns over data privacy, valuation, and impact on youth.
- AI's amplification effect means whoever controls compute and capabilities will shape 21st-century power, akin to the role of oil, gas, and steel in the 20th century.
- Proposals include: securing access to AI compute, diversifying supply chains, building domestic data centers and chip manufacturing, and using AI to augment—not replace—human knowledge, particularly for underrepresented groups such as artisans and farmers.
- Legislators emphasize equitable AI access to prevent opportunity gaps, as only a fraction of the population currently benefits from advanced AI tools.
- Israeli experience cautions against rapid, wholesale AI adoption in governance; recommends incremental use to maintain public trust and ensure evidence-based, strategic policymaking.
Empowering Communities in the Age of Advanced AI
The session at the India AI Impact Summit 2026 brought together leading experts and policymakers to discuss the intricate relationship between AI safety, sustainability, inclusion, and global development, with a focus on impacts for the Global South. Adam Glee of Fari opened by highlighting the risks of advanced AI misuse—including terrorist adoption, jailbreaking large language models for extremist activities, and deepfakes undermining democratic processes. Jaan Tallinn underscored existential risks from unchecked AI automation and warned that societal benefits could be disproportionately accrued by AI-hosting nations and entities, leaving developing societies vulnerable unless they actively engage in governance and international diplomacy. Stuart Russell rejected the false trade-off between AI safety and developmental benefits, illustrating that real progress hinges on robust safety and trust, particularly for the Global South. Robert Opp of UNDP detailed practical international initiatives: governments' landscape assessments, capacity development programs, and the global 'trust and safety reimagination' initiative, including a Declaration on Responsible AI. Overall, the session emphasized that AI's promise for sustainable development requires aggressive risk management, equitable governance, and international cooperation, urging the Global South not to passively rely on outside actors but to assert influence over AI's trajectory.
- Live demonstration revealed current (and older) generative AI models can be jailbroken to assist extremist groups, illustrating urgent real-world risks.
- Deepfake technology is already being used to disrupt democratic elections, as seen in recent fake announcements during the Irish presidential campaign.
- Jaan Tallinn warned that unchecked AI growth could result in profound societal shifts, potentially marginalizing humans in the economy and accelerating inequality between nations.
- The Global South was urged to leverage diplomatic, trade, and policy tools to influence frontier AI companies and ensure safe, inclusive progress.
- Stuart Russell argued there is no true trade-off between AI safety and developmental benefits—unsafe technologies lead to societal rejection and lost opportunities, referencing nuclear and aerospace disasters.
- Distinction made between large language model AIs and task-specific AIs (like AlphaFold); the latter are crucial for development without the risks posed by general-purpose systems.
- UNDP has completed AI landscape assessments in 20 countries (with 10 more forthcoming), aiming to tailor national strategies and build governmental capacity.
- The UNDP's 'Trust and Safety Reimagination Program'—with over 400 global entries and 17 teams selected—focuses on localizing AI safety solutions, including initiatives like Trustweave, Ushahidi, and Silverg Guard.
- Announcement of the Hamburg Sustainability Conference Declaration on Responsible AI to build global alignment among the development community on capacity, trust, safety, and inclusion.
The AI-DPI Nexus: The Future of Public Interest Technology
The session at the India AI Impact Summit 2026 brought together distinguished leaders from government, international organizations, development banks, and foundations to discuss the critical intersection of artificial intelligence (AI) and digital public infrastructure (DPI). Panelists underscored that the greatest value from AI will only materialize when robust, inclusive, and interoperable DPI is in place—enabling AI to address real societal challenges, especially for the previously underserved. Despite global excitement about AI's potential, deep challenges persist, including skills gaps within governments, misaligned priorities among ministries, and the risk of treating AI and DPI as siloed efforts. Concrete examples like Singapore's Singpass show the power of layering AI on DPI to scale public service delivery. The integration of AI and DPI promises a virtuous cycle enhancing inclusivity, efficiency, and personalized services, but it necessitates local capacity, sound governance, and deliberate policy choices. Panelists called for countries to urgently but thoughtfully lay down their digital foundations to fully leverage the AI opportunity while safeguarding sovereignty, trust, and user welfare.
- AI and digital public infrastructure (DPI) are recognized as inseparable for delivering large-scale, inclusive public services.
- DPI serves as the foundational 'rails' making population-level benefits from AI possible, particularly for historically excluded groups.
- A strong divergence exists within governments: tech ministries stress building DPI first, while other ministries are more fixated on immediate AI adoption—sometimes overlooking the prerequisite infrastructure.
- Skills and capacity deficits in government and partnerships with grassroots communities are persistent bottlenecks to effective DPI and AI integration.
- Singapore’s Singpass, which integrates AI with the national digital ID, now supports 41 million monthly transactions—demonstrating effective DPI and AI synergy.
- Both DPI and AI must be governed by principles of interoperability, openness, and inclusion, with local ecosystems and safeguards as essential complements to the technology.
- More than 100 DPI systems are under construction worldwide, with lessons from India’s Aadhaar and other major initiatives shaping best practices.
- AI layered on DPI aids inclusion for linguistically and digitally marginalized populations, offering opportunities for job creation, formal economic participation, and better service delivery.
- Effective integration demands deliberate policy choices, governance frameworks, and locally tailored capacity building, especially for data architecture and data sharing.
- Sovereignty, user trust, and safeguard mechanisms—such as grievance redress and privacy controls—are seen as non-negotiable foundations for sustainable DPI and AI deployment.
AI in Public Audit: Driving Transparency and Accountability
This session at the India AI Impact Summit 2026 focused on the transformative integration of artificial intelligence and machine learning within the Comptroller and Auditor General (CAG) of India's public audit systems. The technical presentation outlined a robust vision and detailed AI strategy framework, emphasizing four foundational pillars: embedding AI in internal audit operations, auditing AI systems deployed across government entities, building significant capacity and skills among staff, and cultivating an enabling infrastructure for secure and effective AI adoption. The session showcased ongoing and future projects including AI-enabled audit toolkits, capacity-building schemes for thousands of officers, piloting advanced large language models developed with academic collaboration, and the move towards comprehensive cyber-security audits. The subsequent panel discussion brought together leaders from academia, government, and industry to evaluate India's readiness for AI-driven public governance, especially concerning data quality, infrastructure scalability, and cross-sectoral partnerships. The dialogue highlighted India’s unique challenges of scale and data complexity, and reaffirmed the importance of collaborative, responsible adoption to ensure transparency, accountability, and trust in AI-powered public institutions.
- CAG of India has released an AI strategy framework built on four pillars: (1) embedding AI in audit/business operations, (2) auditing government AI systems, (3) capacity building, and (4) enabling infrastructure and R&D.
- More than 500 IT audit reports have been tabled in Parliament to date. Expansion into cyber-security audits is underway, with pilots begun in 2026.
- Workforce transformation includes upskilling: 7,800 CAG officers are currently undertaking courses in data science, ML, AI, and cyber-security, with a target of training 5,000 more in the near term.
- CAG’s AI toolkit leverages OCR, NLP, and ML for identifying duplicates and anomalies in beneficiary and procurement data, with purpose-built tools for non-technical users.
- A sovereign large language model is being developed for audit functions; CAG-trained officers currently make up 50% of its contributors, with a goal of 90% within three years.
- Cloud infrastructure and secure ETL pipelines have been established for analytics on government data, with expanded use of drones, satellite imagery, and advanced data analysis.
- Human capital strategy leverages partnerships with IITs, academia, and data science professionals nationwide.
- Growing emphasis on collaborative governance, with explicit calls for industry, academic, and inter-governmental participation to ensure secure, responsible, and scalable AI deployment.
- Panel discussion confirmed India's distinct challenges around data scale, heterogeneity, and quality, but expressed confidence in leveraging existing government centers of excellence and data infrastructure for scalable AI adoption.
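The duplicate- and anomaly-detection role described above can be illustrated with a minimal sketch. This is not CAG's actual toolkit; the field names, data, and threshold are all hypothetical, and a real system would combine OCR, NLP, and trained ML models rather than a simple z-score.

```python
# Toy sketch of two audit checks: repeated beneficiary IDs and
# payment amounts far from the mean. Field names and threshold are
# hypothetical illustrations of the general technique.
from collections import Counter
from statistics import mean, stdev

records = [
    {"beneficiary_id": "B001", "amount": 1200.0},
    {"beneficiary_id": "B002", "amount": 1150.0},
    {"beneficiary_id": "B001", "amount": 1200.0},   # duplicate ID
    {"beneficiary_id": "B003", "amount": 98000.0},  # outlier amount
    {"beneficiary_id": "B004", "amount": 1300.0},
]

def find_duplicates(rows):
    """Return beneficiary IDs that appear more than once."""
    counts = Counter(r["beneficiary_id"] for r in rows)
    return sorted(i for i, n in counts.items() if n > 1)

def find_anomalies(rows, z_threshold=1.5):
    """Return IDs whose amount deviates from the mean by more than
    z_threshold sample standard deviations. Note: with tiny samples the
    outlier itself inflates the standard deviation, hence the low threshold."""
    amounts = [r["amount"] for r in rows]
    mu, sigma = mean(amounts), stdev(amounts)
    return sorted(r["beneficiary_id"] for r in rows
                  if abs(r["amount"] - mu) > z_threshold * sigma)

print(find_duplicates(records))  # ['B001']
print(find_anomalies(records))   # ['B003']
```

In practice such rule-based flags only shortlist records for human auditors; the session's emphasis on purpose-built tools for non-technical users matters precisely because flagged rows still require expert review.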
AI for Industry: Building Resilience, Innovation, and Efficiency
The session at the India AI Impact Summit 2026, spearheaded by leaders from the German Indian Innovation Corridor (GIIC), SAP, Siemens, Vahan.ai, and Cube, set an ambitious agenda for AI adoption in Indian and international industries. The discussions emphasized that AI's true industrial value lies in its potential to augment, rather than replace, human expertise—particularly in manufacturing, energy, logistics, and aerospace. With a focus on cross-border collaboration (notably between Germany and India), panelists highlighted a possible €3 trillion opportunity in AI-driven industrial innovation. SAP's recent survey, cited during the session, underscores rapid AI adoption in Indian enterprises: 23% of business processes are already AI-supported, projected to reach 41% in two years. Confidence is high among leadership, with 93% of CXOs expecting positive ROI within 1-3 years, but only 9% report a holistic approach to adoption. Key roadblocks remain around data readiness (with 72% feeling unprepared), the need for regulatory guardrails, and translating pilots into production at scale. The session proposed an 'IMPACT' framework—Infrastructure, Measurable outcomes, Policy, AI as Horizontal, Citizen Centricity, and Talent—as a pathway to move from pilot projects to widespread, meaningful deployment of AI in industry. The conversation closed with a call for a holistic, design-centered approach to AI, human-machine teaming, and strengthened Germany-India collaboration to solve scaling, innovation, and regulatory challenges.
- The German Indian Innovation Corridor (GIIC) positions industrial AI as a €3 trillion opportunity by creating innovation superhighways between Europe and India.
- SAP-Oxford Economics survey: 23% of Indian business processes now use AI, expected to rise to 41% within 2 years.
- 93% of CXOs in India expect positive ROI on AI within 1-3 years, but just 9% are adopting AI holistically across organizations.
- 72% of enterprise leaders report they are not yet 'data ready' for effective AI integration.
- Panel introduced the 'IMPACT' framework for AI scaling: Infrastructure, Measurable outcomes, Policy, AI as Horizontal, Citizen Centricity, and Talent.
- Germany and India are highlighted as complementary AI innovation partners—Germany's strength in regulation and manufacturing, India's agility and digital capabilities.
- Industrial AI adoption must move from a narrow, vertical, pilot focus to horizontal, strategic integration enterprise-wide.
- Panelists stress the essential role of human-machine collaboration for the future of industrial work.
- Regulatory frameworks like India's DPDP (Digital Personal Data Protection) law were recognized as enabling, not constraining, industrial AI scaling.
- Holistic, design-thinking approaches and robust cross-border partnerships are critical to overcoming enterprise AI barriers.
From AI Sandboxes to National Health Infrastructure | India AI Impact Summit 2026
The session at the India AI Impact Summit 2026 focused on the transformative potential of AI for making healthcare inclusive, accessible, and universal across India. Distinguished leaders from healthcare delivery, technology, research, and public policy discussed how AI and digital public infrastructure (DPI) can bridge critical gaps in healthcare access, affordability, and quality, especially for underserved populations. Concrete examples from Medanta Hospital demonstrated production-scale, AI-driven TB screening and agentic AI systems enhancing doctor-patient interactions, highlighting real-world deployments beyond theory. Panelists emphasized the socioeconomic diversity of India, the need for robust digital and ethical frameworks, and the importance of human-centric, empathetic care integrated with AI. Challenges such as workforce distribution and out-of-pocket healthcare expenditure were addressed, with AI-driven solutions proposed to empower non-physician providers and improve universal health coverage outcomes.
- Session theme: 'AI for Inclusive, Accessible, and Universal Healthcare', bringing together leaders from healthcare, technology, research, and policy.
- Discussion built upon Indian and global experiences of collaboration, digital infrastructure, and AI innovation in public health.
- Highlight: Medanta Hospital's deployment of AI-driven TB detection in mobile vans—reducing screening time from hours to 30 seconds, screening 200 patients per day, four days a week across six vans, with no radiologist involvement.
- Adoption of 'agentic AI' in doctor-patient consultations at Medanta: AI listens to conversations in local languages, structures prescriptions, and enables doctors to focus on empathy and communication rather than typing.
- Panel stressed India's challenges—diverse population, uneven healthcare workforce distribution, financial barriers, and access issues.
- Approx. 6-7% of Indians pushed into poverty annually by healthcare expenses; AI can help reduce costs and bridge both access and quality gaps.
- AI deployment tied to digital public infrastructure (DPI), technical and ethical governance frameworks.
- Session moderator emphasized a shift toward real-world, production-ready solutions rather than theoretical pilots.
- Proposal to train non-physician healthcare providers with AI-driven diagnostic and management algorithms on handheld devices for point-of-care diagnostics.
AI for Inclusive Economic Growth: From Ideas to Impact
The session at the India AI Impact Summit 2026 spotlighted the urgent need to harness artificial intelligence (AI) for inclusive economic growth, with a focus on public service delivery and digital infrastructure, especially in the context of the Global South and India. Panelists emphasized that widespread inequalities—digital, economic, and social—persist despite AI advancements, and that while the global South remains largely a consumer of foundational AI models built in the global North, a crucial opportunity exists for developing locally-relevant stacks that reflect unique cultural, linguistic, and societal needs. The speakers advocated for a rethink of infrastructure—moving beyond the default adoption of frontier models and toward public service AI stacks aligned with digital public goods and robust governance frameworks. The dialogue further highlighted the roles of government, entrepreneurs, and open standards in co-creating trusted, adaptive, and context-sensitive AI systems to empower citizens and ensure accountability, with a special stress on bridging the digital divide and embedding principles of sovereignty and inclusivity in both technology and policy.
- AI's evolution in the last decade is viewed as a force for good, but has also highlighted and amplified pre-existing inequalities, especially in access and participation.
- Most foundational AI models are built in and for a few countries in the global North, with the global South (including India) mostly as consumers, prompting calls for locally-adapted AI systems.
- A significant digital divide endures in India and globally; governments must focus on investments in connectivity, digital skills, and social protection systems to support inclusive growth.
- Panelists underscored the need for AI infrastructure built on digital public goods, governed transparently and accountably, and respectful of local cultures and citizen sovereignty.
- Entrepreneurs play a key role in building context-specific AI services for citizen needs; policy and procurement systems should actively empower and incentivize local innovation.
- Rapid advances in AI require policymakers, governments, and private sector to perpetually revisit and adapt their strategies, moving away from static playbooks.
- India's implementation of distributed digital governance (e.g., India Stack, Aadhaar, UPI) is seen as a model, but further standardization and new stack evolution are needed for scalable, trustworthy applications.
- Call for public service AI stacks: interoperable, open, and trustworthy infrastructures that can support both government and entrepreneurial innovation, with standardization as a key enabler.
India’s AI Infrastructure: Turning Vision into Reality
The opening session of the India AI Impact Summit 2026, 'India's AI Infrastructure: From Vision to Reality,' brought together leading industry figures, government officials, and policymakers to map India's journey toward building a globally competitive and inclusive AI ecosystem. The panel underscored India's transition from AI experimentation to widespread, real-world implementation, emphasizing the necessity for robust infrastructure, energy-efficient computing, trusted domestic manufacturing, and upskilling at scale. Key announcements included the release of a comprehensive white paper outlining actionable steps to strengthen AI value chains, the government's focus on manufacturing capacity (e.g., PLI for IT hardware 2.0), and the imperative to democratize AI benefits via public-private-academic collaboration. Discussion points stressed the critical role of data strategy, hybrid cloud infrastructure, and regulatory approaches tailored to specific AI risks and use cases. The Andhra Pradesh government's goal to ensure every family benefits from AI use cases and entrepreneurship demonstrates ambitious nationwide scaling. The session sets the agenda for India's imminent leap in AI value creation while ensuring transparency, explainability, and societal impact remain at the forefront.
- Summit marks India's transition from AI pilots to real-world, scaled implementation across industries.
- Release of a white paper detailing actionable recommendations for strengthening India's AI infrastructure and value chain.
- Focus on resilient, inclusive, and globally competitive AI infrastructure, anchored in domestic manufacturing and public-private partnerships.
- PLI for IT Hardware 2.0 cited as a key government policy to boost local server and hardware manufacturing.
- India houses 20% of global data generation but only 4% of global data center capacity—closing this gap is a strategic priority.
- Promotion of energy-efficient solutions (e.g., liquid cooling for chips) to address the environmental footprint of AI.
- Emphasis on precision regulation: tailored, sector-specific AI governance with focus on transparency, auditability, and explainability.
- Public-private-academic collaboration highlighted as essential for co-developing technologies and building indigenous capabilities.
- Panelists advocate for open-source AI models and platforms for scalable, inclusive adoption.
- The Andhra Pradesh government set a target for every family to have an AI use case and entrepreneurial opportunity.
AI Innovators Exchange: Accelerating Innovation Through Startup and Industry Synergy
The session at the India AI Impact Summit 2026 underscored India's historic and transformative moment as a major global center of digital innovation and AI-driven wealth creation, with first-generation entrepreneurs creating $450 billion in new wealth, marking India's entry into the seventh largest man-made wealth creation wave since the industrial revolution. Speakers drew parallels to gold rushes, the industrial revolution, and the rise of Silicon Valley to highlight the exponential acceleration of innovation cycles and adaptation speeds, emphasizing how AI is fundamentally altering business paradigms—from the way products are developed to who gets to participate in this new economy. The discussion also showcased best practices in public-private partnerships (PPP) and philanthropic interventions: Sandeep Nailwal (CEO, Polygon Foundation) shared how a $500 million blockchain-powered relief fund supported India's COVID-19 response, including the delivery of 116 million syringes for mass vaccination; Maruti Suzuki’s Rohan Chhatwal described the company’s strategic shift to rapidly partner with over 200 startups in advanced technologies, resulting in 32 becoming tier-one suppliers and over ₹200 crore in collaborative contracts; and Microsoft India’s Manju Dusana outlined the company's focus on democratizing AI and fostering rural innovation, urging inclusivity so all segments of India’s population have equal opportunity to benefit from the AI wave. The session set the stage for a discussion on advancing 'AI for Good' with a focus on speed, adaptation, equitable access, and ecosystem collaboration.
- India’s first-generation entrepreneurs have created $450 billion in new wealth, positioning the country at the center of a modern economic revolution.
- The current opportunity is described as the seventh largest man-made wealth creation event since the industrial revolution in 1750.
- Innovation cycles are compressing rapidly: launching new products has shifted from multi-year timelines to mere weeks or days, driven by AI.
- Sandeep Nailwal and the Polygon Foundation raised $500 million for pandemic relief and delivered 116 million syringes, helping vaccinate one in three Indians during the first wave.
- Maruti Suzuki, through its startup engagement program, has screened 6,000+ startups, collaborated with 200+, and brought 32 startups into its supply chain, awarding over ₹200 crore in contracts.
- Microsoft’s AI initiatives focus on inclusion, partnering with organizations like Unati AI to bring innovation, opportunity, and digital literacy to rural and underrepresented communities.
- Digital transformation and AI have cut the cost of doing business while increasing its speed by as much as 100x.
- There are now at least nine trillion-dollar companies globally, compared with earlier eras when the likes of Microsoft, Exxon, and GE topped the rankings.
- AI is not just a tool but a companion—and potential competitor—in the workplace, prompting urgent calls for widespread reskilling and adaptation.
- A key call to action was made to ensure the AI revolution does not exacerbate inequality but fosters opportunity for all, especially rural and underserved communities.
Advancing Agricultural Transformation through AI & Digital Public Infrastructure |
This session at the India AI Impact Summit 2026 provided a deep dive into the digital transformation and AI integration within India's agri-dairy sector, showcasing Amul's AI-powered advisory system 'Suran', which delivers personalized, cattle-level guidance to over 3.66 million smallholder farmers, predominantly women, across local languages and both feature and smartphones. The panel explored challenges and breakthroughs in building unified digital platforms, evolving from siloed solutions to value-chain approaches, and highlighted the critical role of accessible, contextual advisory, impact measurement, and personalization in maximizing farmer benefit. A significant focus was placed on deploying AI for effective weather forecasting, decision-support, and scaling impact—reaching tens of millions of farmers—while underscoring the importance of impactful message design and coordination among stakeholders. Recent advancements also point to a nationwide momentum towards integrated software infrastructures, farmer and livestock registries, and AI-powered platforms as the bedrock for farmer risk mitigation, inclusivity, and sustainable agricultural development.
- Amul has integrated the entire cattle-to-consumer value chain and registered over 10 million cattle, serving 3.66 million smallholder farmers, mostly women, across India.
- 'Suran', an AI-powered, 24x7 advisory platform in the local language (Gujarati), delivers personalized, real-time guidance to farmers via phone calls and chat (with over 1 million app users and feature-phone solutions).
- Suran leverages a rich, integrated digital database—tracking milk collection, cattle profiles, vaccination, treatment history, and more—enabling context-sensitive, cattle-level recommendations.
- AI now supports both on-demand advice and proactive guidance, including health emergencies, nutrition, government schemes, and artificial insemination schedules.
- Chat and voice interfaces in regional languages ensure inclusivity for farmers with low literacy or feature phones.
- Unified digital ecosystems are replacing siloed departmental applications, improving efficiency, data synergies, and farmer value chain solutions.
- A panelist's past initiative, 'Kishi Samr', delivered advisory services to 38 million Indian farmers and is expanding to reach 7 million farmers in Ethiopia, demonstrating scalable impact.
- AI-enhanced weather forecasting is prioritized as weather events pose the largest risk to farmer livelihoods; advanced models offer timely, localized predictions accessible to farmers.
- Effective impact requires messaging that is not only accurate but context-sensitive and actionable, tailored to diverse farmer understanding and needs.
- India's agri-digital landscape is rapidly evolving toward unified registries for farmers and livestock, cross-platform data-sharing, and deep AI integration for advisory and risk mitigation.
Women in AI: A South Asia Perspective on Equity and Leadership
The session 'Women and AI: A South Asia Outlook on Representation, Equity and Empowerment' at the India AI Impact Summit 2026 focused on the persistent gender disparities in AI-related education, workforce participation, and leadership across South Asia. Tim Curtis from UNESCO highlighted stark imbalances, such as the fact that women make up less than 30% of AI engineering talent in India and even lower percentages in neighboring countries. Yuson Kim presented initial findings from UNESCO's forthcoming 'Gender and AI Outlook Study,' detailing how, despite incremental progress, gender gaps remain prevalent across multiple industries and roles. Emad Karim from UN Women introduced critical data from LinkedIn analyzing how generative AI is set to reshape jobs and skills, warning that women in South Asia are predominantly employed in sectors most vulnerable to AI-driven disruption and face systemic barriers to transitioning to more secure positions. The panel discussion reinforced that gender is central—not peripheral—to tech policy debates in India, especially in light of recent legislative shifts addressing gendered harms online. Collectively, the session advocated for deliberate, policy-driven interventions, active support for women entrepreneurs and researchers, and strong institutional leadership to ensure women's inclusion and leadership in shaping AI's future in South Asia.
- UNESCO’s forthcoming 'Gender and AI Outlook Study' reveals women constitute just under 30% of AI engineering professionals among LinkedIn users in India, 20% in Nepal, and 15% in Bangladesh.
- Only 26% of AI-related academic publications in the region are led by women as primary or corresponding authors, despite 71% featuring at least one female author.
- Women in South Asia are heavily concentrated in job sectors most likely to be disrupted or only partially augmented by AI, with up to 80% of women's jobs in some countries at risk.
- Skill requirements for jobs in Asia-Pacific have already changed by more than 40% between 2016 and 2023, with projections indicating a 71% change rate due to AI acceleration.
- Women are less likely than men to successfully transition from disrupted sectors to safer, more AI-resilient jobs.
- Session underscored the need for robust policy interventions, targeted upskilling programs, and financial support for women entrepreneurs in AI.
- Recent revisions to India’s IT rules addressing gendered online harms were referenced as evidence that gender is now a central issue in technology governance.
Futures-Ready Policymaking: AI Literacy for Global Digital Governance
The session focused on the imperative for policymakers and diplomats to develop 'future literacy'—the ability to anticipate and actively shape the global digital policy landscape amid rapid technological advancements such as AI. Led by representatives from the German Federal Foreign Office's Data Innovation Lab, the discussion outlined the complexity and urgency facing international digital governance, emphasizing that reactive, crisis-driven approaches must be replaced with proactive, foresight-driven policymaking. Four provocative scenarios for the future of digital governance (from hyper-centralized regulation to decentralized, community-driven systems) were shared to illustrate possible trajectories. The participants highlighted the need for embedding future literacy in policymaker training, fostering cross-sector, multistakeholder collaboration—especially between India and Germany—and ensuring digital and AI literacy for effective, responsible adoption and negotiation of digital policy at both national and international levels. The session underscored the importance of bridging knowledge and practice gaps among policymakers to ensure informed, anticipatory responses to technological disruption.
- Policymakers and diplomats need 'future literacy' to anticipate and navigate AI-driven changes in global power dynamics.
- Four scenarios for 2035 digital governance were presented: weakly regulated but ubiquitous AI, unreliable digital systems driving analog fallback, hyper-regulated state-centric control, and decentralized, community-led digital ecosystems.
- The German Federal Foreign Office Data Innovation Lab and the India-Germany Digital Dialogue are advancing joint digital transformation and cross-national engagement.
- Scenario planning and foresight methods enable preparedness for multiple plausible futures, rather than reliance on a single vision.
- Current gaps in digital, data, and AI literacy among policy professionals risk creating disconnects between technological adoption and an understanding of their societal and political ramifications.
- Embedding future literacy and digital literacy into ongoing training for policymakers is advocated as a mandatory (not optional) part of responsible governance.
- Multilateral collaboration, public-private partnerships, and inclusive policy processes are emphasized as crucial for shaping trustworthy, human-centric digital infrastructures.
Financing AI Futures: Digital Foundations for Asia-Pacific
This session at the India AI Impact Summit 2026 spotlighted India's ambitious progress towards building AI-ready digital infrastructure through an integrated strategy emphasizing public-private partnerships and digital public goods. Speakers from the Asian Development Bank (ADB) and the Government of India discussed the structure of India's National AI Mission—a five-year, $1.2 billion initiative focused on democratizing AI access and enabling innovation at scale. The session detailed India's substantial investments in foundational digital infrastructure, including payment, identity, and data systems, which now underpin rapid AI adoption. Key policy and program directions include deploying computational capacity via shared public-private models, indigenous foundation model development tailored to India's diversity, a central AI data/model repository 'AI Kosh,' targeted support for use cases with high public value but limited commercial appeal, workforce upskilling through AI labs, trusted AI/governance frameworks, and patient capital for AI startups. The discussion highlighted tangible outcomes, with a jump from 438 to 38,000 GPUs for AI compute in under two years through demand aggregation and transparent partnerships, and a robust pipeline of indigenous models, datasets, and innovation labs. The model balances central government investment in shared foundations with broad private sector involvement in applications, aiming for a leap in productivity, governance, and equity, and serving as a blueprint for other emerging economies.
- India's National AI Mission launched less than two years ago with a $1.2 billion (₹10,000 crore) budget focusing on democratizing AI access and benefits.
- Digital public infrastructure advances include 21.7 billion monthly payment transactions (valued at $290 billion), 660 million DigiLocker digital identity users, and Aadhaar covering over 99.5% of India's population, leading to $42 billion in savings via direct benefit transfers.
- AI compute capacity has scaled from 438 GPUs to 38,000 GPUs within two years, achieved through innovative public-private partnership models and demand aggregation.
- The seven-pillar India AI strategy encompasses compute access, indigenous foundation model development, a national data/model repository (AI Kosh), an application development initiative, future skills training, trusted AI/governance, and startup financing.
- 12 indigenous AI startup companies are building foundation models; 30 data & AI labs have been set up in technical institutions, with a plan to reach 270 labs.
- The state prioritizes foundational infrastructure and catalytic support for non-commercially viable AI use cases, leaving commercial innovation to the private sector.
- Synthetic data/hackathon models enable innovation while protecting sensitive public datasets (e.g., national cybercrime database).
- APAC regional cooperation and lessons shared, positioning India's model as an exemplar for scaling AI in developing economies.
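The synthetic-data/hackathon bullet above can be sketched in miniature: fit simple per-column distributions to sensitive records, then release freshly sampled rows instead of the originals. Everything here (schema, values, sampling method) is a hypothetical illustration; real programs protecting datasets like a national cybercrime database would add formal guarantees such as differential privacy.

```python
# Toy sketch of synthetic-data release: no real row is ever published,
# only rows sampled from trivially fitted per-column distributions.
# Schema and values are hypothetical.
import random

real_rows = [
    {"state_code": "MH", "cases": 120},
    {"state_code": "KA", "cases": 95},
    {"state_code": "MH", "cases": 140},
    {"state_code": "TN", "cases": 60},
]

def synthesize(rows, n, seed=0):
    """Sample n synthetic rows: categorical column by observed frequency,
    numeric column uniformly within the observed range. Columns are
    sampled independently, so cross-column correlations are not preserved."""
    rng = random.Random(seed)
    states = [r["state_code"] for r in rows]
    counts = [r["cases"] for r in rows]
    lo, hi = min(counts), max(counts)
    return [{"state_code": rng.choice(states),
             "cases": rng.randint(lo, hi)} for _ in range(n)]

synthetic = synthesize(real_rows, n=100)
assert all(60 <= r["cases"] <= 140 for r in synthetic)
print(len(synthetic))  # 100
```

Hackathon participants can then build and benchmark models against the synthetic set, with final validation run by the data custodian against the protected originals.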
Whose Language, Whose Model? Public-Interest Multilingual LLMs
The session at the India AI Impact Summit 2026 provided an in-depth exploration of the vital need for meaningful multistakeholder participation across the AI system lifecycle. Moderated by Aliya Bhatia of the Center for Democracy and Technology, the panel featured Jhalak Kakkar from the National Law University in Delhi and Dhanaraj Thakur of George Washington University, both of whom shared insights on governance, policy, and community involvement. The panelists highlighted that while AI deployment accelerates globally, conversations and decision-making powers are often centralized among industry or governments in the global North, sidelining civil society, affected communities, and local experts—especially from the global South. Drawing parallels to the shortcomings of social media regulation, Kakkar stressed the necessity of moving quickly from voluntary commitments and soft law to binding standards, regulations, and enforceable mechanisms. Thakur underscored the importance of redistributing power in AI development by prioritizing the needs and expertise of affected and local communities, drawing lessons from international development. Both speakers advocated for participatory approaches that engage communities throughout design, deployment, and evaluation, rather than as an afterthought or mere compliance. They called for nuanced frameworks matching stakeholder roles to different AI lifecycle stages, with genuine engagement over tokenism. New reports and initiatives were also alluded to, aiming to establish tools for integrating diverse stakeholders, especially to address concerns related to linguistic and cultural contexts and the right to refusal or redress. The session set the stage for collaborative discussions on turning these principles into structured, contextualized policy and practice.
- Panel called for a shift from voluntary or soft law frameworks to enforceable regulations and standards in AI governance to prevent repeating social media governance failures.
- Highlighted the exclusion of the Global South, civil society, and affected communities from AI policy conversations dominated by industry and government actors in the Global North.
- Dhanaraj Thakur emphasized the need for participatory power redistribution, advocating for 'machine in the loop' models centering end-user needs and local contextual expertise.
- Panel underscored that genuine community involvement requires tailored frameworks at different AI development lifecycle stages—not a one-size-fits-all, box-ticking approach.
- Jhalak Kakkar previewed a forthcoming report by the Centre for Communication Governance that details stakeholder roles and tools for safety and governance across the AI lifecycle.
- Emphasis placed on the importance of considering both linguistic and cultural contexts, to avoid 'multilingual' becoming a proxy for 'multicultural' in AI system design.
- Session promoted new cross-sector initiatives aiming to incentivize, structure, and enforce meaningful participation and accountability beyond just consultation.
Global Tech, Local Impact: Governing AI Where It’s Deployed
This session of the India AI Impact Summit 2026 focused on the increasingly important theme of 'AI implementation for middle powers,' with a sharp focus on the concept of digital sovereignty. Moderated by Akash Kapoor, the panel brought together leading voices from academia, policy, and regulatory spheres to explore how countries outside the US-China tech duopoly—such as India and many in the Global South—can assert agency in AI adoption. Panelists discussed the dual dimensions of the debate: the geopolitical drive toward sovereignty as a shield against dominance by 'digital empires,' and the grassroots imperative of on-the-ground AI model development that truly reflects local languages, cultures, and priorities. Key findings highlighted India's ambitious multi-layered push for sovereign AI capacity (from chip design to data and application layers), the rapid global uptick of government-funded sovereign AI projects (which more than tripled between 2024 and 2026), and the practicalities of creating infrastructure that enables nations to choose and control their technological destinies. Issues of innovation, user privacy, security, and representation of under-resourced cultures and dialects were central, painting a vivid picture of sovereignty as a strategic, infrastructural, and cultural challenge. The panel made clear that the sovereignty continuum is not an on/off switch: every country must weigh its own blend of autonomy, collaboration, and capacity building in AI.
- The concept of AI sovereignty is gaining momentum globally, moving beyond the US-China tech race and focusing on national agency for middle and smaller powers.
- A new 'sovereign AI index' reveals that, between 2024 and early 2026, the number of government-funded national AI infrastructure/model/data projects grew from 40 in 30 countries to 130 in over 50 countries.
- India is one of only four countries (alongside South Korea, Taiwan, and Sweden) developing sovereign AI across all stack layers: chips, models, and data.
- India frames AI sovereignty not just in geopolitical terms but as vital for cultural representation, stressing the need for indigenous models that reflect India's over 100 languages and dialects.
- Panelists from Ofcom (UK) and companies working across the Global South identified three central sovereignty challenges: ensuring innovation access, user rights/privacy, and cybersecurity.
- For resource-constrained nations, the panel stressed the need to build context-appropriate ('frugal') digital infrastructure, enabling local data residency, representation, and economic value creation.
- Emergence of open-source AI and alternative hardware options (e.g., non-US GPUs) is broadening pathways for countries seeking digital self-reliance.
- Sovereignty is highly contextual: each nation's approach must be calibrated to its trade position, security needs, governance priorities, and infrastructural realities.
AI and the Future of Work: Skills, Jobs, and Labour Market Transformation
The session highlighted the transformative potential of voice AI-enabled solutions tailored for India's vast linguistic, cultural, and demographic diversity. Presenters from Indus described building 24/7 multilingual voice AI platforms capable of providing empathetic and efficient support in crises and everyday scenarios, including customer service, government helplines, social welfare access, and services for underserved populations such as seniors, juveniles, and farmers. Emphasis was placed on the need for AI systems to interpret not only multiple languages and dialects but also local emotions and urgency, thus ensuring timely and relevant responses at scale. The speakers stressed the importance of human-centric design, emphasizing empathy, inclusivity for illiterate and visually impaired users, cultural respect, transparency, and ethical governance as cornerstones of trustworthy AI adoption in India.
- Indus has developed a scalable multilingual voice AI platform designed to support India's 1.4 billion people across 22 official languages and over 18,000 dialects.
- The platform focuses on accessibility for underserved groups—including farmers, seniors, juveniles, and non-tech populations—by enabling voice-based interaction without the need for apps or websites.
- Voice AI offers immediate, empathetic, 24/7 support for a broad range of scenarios: disaster relief, national scheme awareness, customer service, mental health helplines, elder assistance, and more.
- Speech recognition models are homegrown and trained to understand not only dialects and languages but also emotion and urgency, ensuring appropriate triage and referral in crisis situations.
- Human-centric design is central: systems are built with empathy, inclusivity (including support for illiterate and visually impaired users), continuous feedback, and cultural context (e.g., addressing users with familiar greetings and respectful language).
- Cultural trust is reinforced by language localization and culturally specific interactive cues (like 'namaste ji') that enhance user comfort and engagement.
- The technical infrastructure is engineered for exponential scaling during sudden demand spikes (like floods or crises), with robustness to imperfect network conditions and noisy inputs.
- Transparency, clarity of AI identity, privacy, ethical governance, and cultural respect are foundational design and operational priorities.
India AI Impact Buildathon 2026 | AI for Social Good & Digital Fraud Prevention
The opening session of the India AI Impact Summit 2026 set an ambitious and inclusive tone, highlighting the significance of the AI Impact Buildathon as both a catalyst for nationwide talent identification and a demonstration of India's growing prowess in the AI sector. Distinguished leaders from government, industry, and education convened to celebrate the culmination of a four-month journey that engaged over 40,000 participants from 600+ cities, democratized AI learning through workshops and online resources in multiple languages, and culminated in a tightly contested national final in New Delhi. The event underscored the government's commitment to fostering responsible AI innovation, supporting grassroots startups, and embedding AI literacy at all levels, with direct initiatives to ensure diversity, access, mentoring, and support for future AI entrepreneurs. Notable government, industry, and education leaders emphasized the urgency for India not just to consume but lead in AI—through initiatives such as providing low-cost compute, data access, funding, and mentorship. Key achievements cited include nationwide hands-on workshops, record-breaking participation, partnerships with industry (notably HCL Tech), and demonstrations of world-leading digital public infrastructure, as exemplified by the rapid rollout of UPI for foreign delegates. The session positioned the summit as a defining moment to empower the next generation of AI builders and propel India's ascent as both a skills and innovation-driven global AI powerhouse.
- Over 40,000 participants from 600+ cities registered for the India AI Impact Buildathon, including students, graduates, professionals, and non-tech backgrounds.
- The buildathon featured an innovative four-stage contest structure: enhance, evaluate, pitch, and celebrate, with 850 finalists comprising the top 2% of entrants.
- Democratization of AI learning achieved via free workshops, online skilling resources in native languages, and hands-on training at 48 venues nationwide, directly training 10,000+ students.
- GUVI, the online skilling platform, reported supporting 4.5 million learners globally in 20 languages, and holds two Guinness World Records for AI skill outreach.
- Significant government backing highlighted by the presence of Sh. Abhishek Singh, Additional Secretary, MeitY and CEO of India AI, emphasizing responsible AI adoption, support for startups, mentoring, and access to compute and datasets.
- Industry-academia collaborations (e.g., HCL Tech, MongoDB, educational institutions) played a pivotal role in grassroots AI outreach and workshop delivery.
- Recognition of India's digital public infrastructure (DPI) achievements, with the extension of UPI services to foreign delegates during the summit.
- Strong emphasis on building indigenous AI solutions with real-world social impact—in agriculture, healthcare, education, and for people with disabilities.
- Summit positioned as a nation-building movement, seeking to make India a global AI skills and innovation superpower by empowering 'AI builders' rather than just consumers.
Multilingual AI in Universities: Advancing Inclusive Education
The session 'Whose Language, Whose Model' at the India AI Impact Summit 2026 focused on the urgent need for meaningful multistakeholder participation throughout the AI lifecycle — from design to deployment and beyond. Panelists critiqued the dominance of Global North-led, industry-driven conversations and highlighted the importance of centering voices from civil society, policy experts, and, crucially, affected communities. The discussion referenced learnings from prior experiences with social media regulation, cautioning against an overreliance on voluntary commitments and advocating for a timely shift toward stronger regulatory frameworks. Emphasis was placed on power dynamics, urging that future AI governance and development must empower local communities to define use cases and solutions suited to their needs, rather than imposing top-down technological interventions. The session also called for regulatory or policymaking approaches that not only incentivize but mandate participatory processes, drawing on decades of participatory development research to shape more inclusive, accountable, and effective AI systems.
- Session highlighted persistent gaps in AI governance where industry and Global North governments dominate the agenda, sidelining civil society and affected community voices.
- Panelists called for moving from voluntary commitments and 'soft law' toward enforceable standards and regulations to ensure accountability in AI system development and deployment.
- Lessons from past regulation of technologies (e.g., social media) underscore the risks of delaying regulatory action and over-prioritizing innovation at the expense of oversight.
- Multistakeholder participation must be substantive, not tokenistic; power should shift to local communities who should help define use cases and deployment contexts for AI tools.
- Participatory methods from international development and other technology governance fields provide actionable blueprints for effective community engagement in AI.
- There is a need to codify participatory requirements into AI standards, regulations, and procurement processes to avoid repeating mistakes made in social media governance.
Competition and AI: How Innovation Accelerates at Scale
The session at the India AI Impact Summit 2026 centered on the accelerating role of competition in fostering AI innovation, with a robust debate among policy experts, competition economists, and startup advocates about concentration risks and digital sovereignty. Panelists highlighted how dominant digital platforms, largely from the US and China, control vital layers of the AI technology stack—such as foundational models, cloud infrastructure, and distribution channels—posing significant risks to innovation, autonomy, and national sovereignty for countries like India. The discussion underscored the need for competitive, open-source, and locally controlled AI ecosystems, while acknowledging the challenges posed by capital intensity and existing market power. Indian startups, meanwhile, appear more focused on collaboration with global tech giants rather than immediate concerns over market dominance, although experts warned of the potential pitfalls of such dependencies, including predatory acquisitions and restrictive partnerships. The session concluded with calls for stronger regulatory safeguards, clear competition policies, and proactive measures to ensure that the benefits of AI innovation reach a broader base and do not reinforce entrenched monopolies.
- Digital platform dominance in AI is mirroring earlier patterns of concentration in search, cloud, and app ecosystems, risking control over key digital infrastructure.
- Major players in AI (mostly from the US and China) exert control over ~92% of AI development resources, including foundational models and cloud infrastructure.
- Concentration risks are heightened by capital intensity and hardware dependencies, making it difficult for countries like India to participate meaningfully beyond the consumer interface.
- India faces real challenges regarding data sovereignty and the ability to counteract the vertical integration of global tech giants.
- Barriers to entry in AI occur both upstream (compute, models, cloud) and downstream (distribution, monetization), challenging the notion that ‘a thousand flowers will bloom’ at the application layer.
- Examples were cited where dominant firms have moved proactively to capture value in critical sectors such as healthcare by leveraging equity or revenue-sharing models tied to AI adoption.
- Despite these risks, Indian startups currently view partnerships with big tech as opportunities, with 75% of funding in the last year coming from Google, Microsoft, and Amazon (who also own the critical infrastructure).
- Startups often see success as being acquired by these giants, reflecting limited pathways to independent scaling.
- Regulators are urged to scrutinize partnerships and acquisitions more closely due to their implications for competition, data control, and bundling practices.
- There is an urgent need for a competitive, open-source AI stack that leverages affordable compute, local cloud, and locally trained models to drive homegrown innovation.
- The session calls for vigilant, preemptive regulation and new competition policy frameworks to tackle the rapid evolution and complexity of AI markets.
Sustainable AI in Practice: Global Best Practices and Lessons Learned | Panel Discussion
The session at the India AI Impact Summit 2026 explored cutting-edge directions in AI, including early discussions on Artificial General Intelligence (AGI) in India, the practical application of Large Language Models (LLMs), and the drive for sustainable AI deployments. Panelists highlighted the importance of collaborations between industry, academia, and government to foster innovation and enhance real-world project-based learning. The session also showcased empirical results from the Green Mind Sustainable Hackathon, focusing on concrete sustainability metrics such as the Energy Intensity Score (EIS), which measures energy consumed per unit of AI output. Speakers emphasized the need for green micro-data centers, tracking AI's carbon footprint, and developing standardized frameworks for sustainable AI. They further discussed how policy measures—including government tax incentives linked to efficiency metrics—are driving enterprises to treat sustainability as an engineering and business imperative, aligning AI growth with India's environmental and economic priorities.
- Discussions on AGI (Artificial General Intelligence) indicate it's still in early stages, with Indian thought leaders questioning if LLMs alone are adequate.
- Emphasis on building physically-aware AI systems capable of real-time decision-making.
- Strong advocacy for bridging academia and industry through project-based, real-world AI learning opportunities.
- Announcement of partnerships, such as establishment of a Center of Excellence with Blaze (Silicon Valley firm) and broader collaborations with giants like Google and Pearson.
- Insights from the Green Mind Sustainable Hackathon (Bangalore, Oct-Nov 2026) revealed successful AI innovations in healthcare (breast cancer treatment) and public sector (RTI assistant).
- Introduction and adoption of the Energy Intensity Score (EIS), measuring total energy use divided by useful work performed (i.e., energy per unit of AI output), as a key metric for sustainable AI.
- Real-time tracking of EIS during hackathons led to notable optimizations and architecture creativity, with energy efficiency becoming a competitive metric.
- Fewer than half of AI organizations currently track their energy or carbon footprint rigorously, indicating substantial room for improvement.
- Government of India incentivizes AI efficiency through tax benefits up to 2047, contingent on demonstrated energy efficiency and resource optimization.
- Panel underlined the need for moving beyond 'sustainable AI' as a marketing checkbox toward it becoming a measurable engineering discipline.
- Promotion of green micro-data centers and distributed edge computing to reduce resource consumption and democratize AI access.
- Panelists advocated developing multi-level, actionable sustainability metrics tied to business objectives and governance frameworks.
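The EIS discussed above reduces to a one-line formula: total energy consumed divided by useful work performed. A minimal sketch, assuming kWh for energy and inference count as the unit of 'useful work' (the session did not fix a standard denominator):

```python
def energy_intensity_score(total_energy_kwh: float, useful_work_units: float) -> float:
    """Energy Intensity Score: energy consumed per unit of useful AI output."""
    if useful_work_units <= 0:
        raise ValueError("useful work must be positive")
    return total_energy_kwh / useful_work_units

# Comparing two candidate architectures on the same 1M-inference workload
# (figures illustrative, not from the hackathon):
baseline = energy_intensity_score(12.0, 1_000_000)   # 12 kWh for 1M inferences
optimized = energy_intensity_score(7.5, 1_000_000)   # 7.5 kWh after optimization
assert optimized < baseline  # lower EIS = more efficient per unit of output
```

Tracking this ratio in real time, as the hackathon did, turns energy efficiency into a directly comparable, competitive metric across teams.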
Hardware-Rooted AI Sovereignty: Building Trusted Infrastructure for the Global South
This session from the India AI Impact Summit 2026 showcased India's rapid strides in AI enterprise adoption and hardware sovereignty. Speakers discussed the complex challenges faced by large-scale enterprises in rolling out AI solutions—especially in regulated sectors—emphasizing the critical roles of robust technical infrastructure, data security, and compliance. Demonstrations of AI-powered voice agents highlighted how enterprises are augmenting their workforces to achieve cost savings, higher efficiency, and deeper customer engagement. On the policy front, Secretary S. Krishnan of the Ministry of Electronics and Information Technology outlined India's focused investment in semiconductor manufacturing through the India Semiconductor Mission, mentioning the imminent launch of the Micron facility for domestic semiconductor production. Crucially, the government's approach now subsidizes access to AI compute resources rather than the infrastructure itself, dramatically reducing GPU access costs for researchers, students, and SMEs. Additional budgetary moves streamline tax policies to attract global cloud and data center investments, underlining India's ambition to be both a major producer and an accessible hub for AI hardware and services.
- Enterprises in India are moving toward large-scale AI adoption, facing specific challenges around infrastructure reliability, regulatory compliance, and trust.
- Voice-based AI agents are being positioned as the next frontier for customer engagement, offering more personal and effective communication than chatbots.
- Demonstration of 'blue machines.ai' showcased AI agents autonomously handling customer queries including compliance and data security scenarios.
- 91% of enterprises cite speed of AI deployment as a critical factor, with demand shifting from year-long projects to solutions that can be rolled out within weeks.
- The India Semiconductor Mission is progressing; Micron's semiconductor facility is set to begin production soon—marking India's entry into domestic, commercial-scale semiconductor manufacturing.
- India is strengthening access to high-bandwidth memory components crucial for AI, aiming to reduce dependence on imports.
- Rather than directly subsidizing AI compute infrastructure, the government now subsidizes access to AI compute—enabling users to access GPUs at about one-quarter of international rates (roughly 65 rupees per GPU-hour vs. $2.5-$3 per GPU-hour globally).
- Budget reforms now clarify tax treatment for overseas data centers servicing global clients from India, aiming to spur further investment in cloud and AI infrastructure.
- Event attendance significantly surpassed expectations, reflecting burgeoning national and global interest in India's AI trajectory.
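The quoted GPU rate can be sanity-checked with back-of-envelope arithmetic. The INR/USD exchange rate below is an assumption (the session did not state one), so the result is indicative only:

```python
INR_PER_USD = 83.0                  # assumed exchange rate, not from the session
subsidised_inr_per_gpu_hr = 65.0    # quoted domestic rate
global_usd_low, global_usd_high = 2.5, 3.0  # quoted international range

subsidised_usd = subsidised_inr_per_gpu_hr / INR_PER_USD  # ~$0.78/GPU-hr
ratio_vs_high = subsidised_usd / global_usd_high          # vs. $3.00
ratio_vs_low = subsidised_usd / global_usd_low            # vs. $2.50

# Roughly 26%-31% of international rates, consistent with "about one-quarter".
print(f"${subsidised_usd:.2f}/GPU-hr = {ratio_vs_high:.0%}-{ratio_vs_low:.0%} of global rates")
```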
South–South AI Cooperation: Building a Shared Policy Roadmap
The session focused on South-South collaboration in AI policy, highlighting the unique efforts of the Africa Asia AI Policymaker Network, which brings together policymakers from seven countries—Ghana, India, Indonesia, Kenya, Rwanda, South Africa, and Uganda. The network, established in 2022 with support from the German Ministry for Economic Cooperation and Development and partner organizations, serves as a collaborative platform for sharing experiences and developing contextually relevant AI policy solutions tailored for the Global South. Panelists outlined key collaborative mechanisms such as regulatory harmonization, talent mobility, shared infrastructure, cross-border sandboxes, data exchanges, and peer advocacy. Real-world experiences from Kenya and Indonesia illustrated the challenges and opportunities of constructing AI governance frameworks in emerging economies, emphasizing the need for locally driven solutions, ethical guidelines, robust data governance, capacity building, and mutual learning across countries. The session reinforced that AI governance cannot merely be technical, but must be socially responsive and grounded in public interest to ensure equitable and inclusive progress.
- The Africa Asia AI Policymaker Network comprises officials from Ghana, India, Indonesia, Kenya, Rwanda, South Africa, and Uganda, promoting cross-regional collaboration on AI policy.
- Founded in 2022, the network is supported by the German Ministry for Economic Cooperation and Development, along with the Global Center on AI Governance and South Africa's Human Sciences Research Council.
- Key outputs include an AI policy playbook providing practical regulatory advice for policymakers in the Global South.
- Six main collaboration mechanisms were outlined: regulatory harmonization and mutual recognition, joint AI skills and talent mobility programs, shared infrastructure and data exchanges, collaborative development models, cross-border regulatory sandboxes and audits, and policy peer exchange with widespread stakeholder participation.
- Kenya's experience highlighted an initial focus on AI strategy, challenges of local stakeholder understanding, and the need for shared infrastructure and data governance frameworks.
- Indonesia launched an AI Talent Factory program to bridge the gap between education and industry, emphasizing project-based learning and the recent update of its national AI ethics guidelines.
- The network serves as a crucial bridge between evidence and decision-making, ensuring AI policies are technically sound, socially grounded, and aligned with public interest.
- Participants emphasized moving from normative discussions to actionable, context-sensitive policies and practices.
Building Language AI at Scale | Voice AI & Global Collaboration | India AI Impact Summit 2026
The session focused on the pressing challenges and significant advancements in leveraging AI and voice technologies to bridge digital, linguistic, and literacy divides in India and beyond—particularly for last-mile connectivity in health, governance, agriculture, and education. Panelists highlighted ongoing initiatives such as equipping ASHA workers with tablets, hosting hackathons to catalyze context-sensitive innovation, and accelerating the development of robust voice models that cater to India's myriad languages and dialects. Usage statistics underscore substantial adoption, with up to 400 million translation inferences per month and growth rates of 15-20% monthly, indicating a latent but rapidly surfacing demand. Key challenges remain in generating high-quality linguistic datasets, covering dialectal diversity, building digital dictionaries of place names, and handling emotional nuances in speech. Cross-sectoral, global collaboration and ecosystem development were underscored, emphasizing the need for standards, public-private partnerships, and active involvement of local startups and researchers to ensure inclusive and meaningful AI deployment, especially in social sectors and underserved regions.
- ASHA workers are being equipped with tablets, and targeted hackathons are fostering startups and youth to develop AI solutions for healthcare last-mile delivery.
- Voice AI systems in India process 15-18 million translation requests daily (up to 400 million monthly), with 15-20% month-on-month growth.
- Despite technological progress, ~300 million Indians still rely on feature phones, and many lack the digital literacy to interact with text-based apps.
- Panelists stressed that voice and inclusive language technologies are essential, not a convenience, for reaching non-literate and linguistically diverse populations.
- Gates Foundation initiatives in Africa demonstrated that just 200 hours of high-quality language data can drive Word Error Rates under 10% for underserved languages.
- Major challenges include the lack of comprehensive digital dictionaries for place names (~16 to 18 lakh locations yet to digitize), dialectal diversity, and contextual/emotional nuances in speech.
- Ecosystem-building efforts, such as those led by Masakhane and others, aim to align research with practical deployment, emphasizing data quality, standardization, business case development, and collaboration with local entrepreneurs.
- Use cases with significant uptake include social sectors: health, education, agriculture, and governance, especially for service and information delivery at grassroots.
- Global collaboration and experience exchange (between India and Africa, for example) are accelerating the adoption and benchmarking of voice/language AI systems.
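The sub-10% Word Error Rate cited above refers to the standard Levenshtein-based metric: word-level edit distance between a reference transcript and the model's hypothesis, divided by the reference length. A minimal reference implementation (not the Gates Foundation's evaluation code):

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + deletions + insertions) / reference word count."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edits to turn the first i reference words into the first j hypothesis words
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # match or substitution
    return dp[len(ref)][len(hyp)] / len(ref)

assert word_error_rate("the cat sat", "the cat sat") == 0.0
assert word_error_rate("call the health helpline now", "call the helpline now") == 0.2
```

A WER under 10% means fewer than one word-level error per ten reference words on average, which is why modest but high-quality datasets (200 hours) can be sufficient for usable recognition in underserved languages.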
AI Innovators Exchange: Accelerating Startups Through Collaboration
The session at the India AI Impact Summit 2026 brought together prominent figures from government, industry, startups, agriculture, and academia to discuss the rapid proliferation of AI in India, emphasizing the importance of responsible and ethical AI development, inclusive growth, and bridging research with real-world impact. Arvind Kumar of STPI showcased the organization's pivotal role in fostering 1,800 startups across tier 2 and 3 cities, noting a dramatic shift as all innovation centers now incorporate AI. Mastercard's Ravi Aurora highlighted the importance of scaling AI responsibly, underpinned by trust, resilience, and accountability, positioning India as a global leader in developing AI governance norms. Ankush Sabharwal, founder of Bharat GPT, encouraged grassroots AI creation and domain-driven innovation by Indians, while Vive Raj of Panama Corporation spotlighted the need for greater AI investment in agriculture, outlining breakthroughs using AI to optimize pollination and climate in vertical farming. Lastly, Prof. Nitin Saxena from IIT Kanpur called for academia and policy to move beyond pure research, advocating for stronger industry-academia collaboration to translate AI advancements into real-world benefits for Bharat.
- STPI now supports over 1,800 tech startups, with 62 of its 70 centers located in tier 2 and tier 3 cities.
- STPI has established 24 domain-specific Centers of Entrepreneurship, all providing end-to-end support from ideation to global market access.
- A major shift: all STPI centers now integrate AI, expanding from an initial two dedicated AI centers to organization-wide adoption within three years.
- Arvind Kumar defined 'Responsible AI' by the FAST-P framework: Fairness, Accountability, Security, Transparency, and Privacy.
- Mastercard's Ravi Aurora underlined that India's 'AI for Bharat' is unique in targeting population-scale, inclusion, resilience, and trust.
- India aspires to co-author global AI governance, not just adopt norms, especially around transparency and accountability.
- Ankush Sabharwal (Bharat GPT) emphasized the democratization of AI creation tools, urging individuals to be creators driven by domain knowledge.
- Agriculture receives less than 5% of global AI investment—Vive Raj demonstrated applications such as AI-driven pollination systems and intelligent microclimate management to increase yield and scalability.
- Professor Nitin Saxena urged policymakers to foster more than traditional research, promoting actionable academia-industry initiatives for direct impact.
Unlocking Scientific Equity | AI, Access, and the Future of Global Research
The panel at the India AI Impact Summit 2026 addressed critical aspects of implementing AI in healthcare, focusing on explainability, trustworthiness, and responsible usage. Experts underscored the importance of traceability, bias mitigation, privacy, and accountability in the deployment of AI-powered solutions, especially in clinical settings. Recent policy moves, such as mandating AI training in India's medical education curriculum, were lauded as significant steps forward. Government representatives detailed efforts to expand access to digital clinical materials in medical colleges, with initiatives like 'One Nation One Subscription' aiming to bridge resource gaps, particularly in rural and aspirational districts. Speakers highlighted the challenge of balancing AI accuracy with the probabilistic nature of clinical decision-making, emphasizing the need for causal reasoning frameworks. The discussion also stressed embedding empathy and compassion into AI-driven healthcare, reflecting a holistic approach that aligns with the national goal of 'happiness for all.' Responsible, equitable, and compassionate AI integration—anchored by globally recognized principles—was upheld as vital to maximizing benefits and minimizing risks as India scales up digital healthcare transformation.
- Explainability and traceability in AI solutions are essential for healthcare professionals to trust and adopt AI tools.
- AI training is now mandatory for medical education in India, marking a significant policy advancement.
- Initiatives underway to provide digital clinical material to 57 government medical colleges, with plans for national scale-up via 'One Nation One Subscription.'
- Government is working to expand MBBS seats from 30,000 to 118,000 and increase the number of medical colleges to about 800 to address access and equity.
- There is a focus on leveraging AI and telemedicine to compensate for faculty shortages and uneven resource distribution, especially in rural areas.
- Bias in AI can be introduced as early as the way clinicians frame questions to AI tools; continuous vigilance and training are critical.
- Privacy and data protection remain paramount, with data usage strictly limited to its intended purpose.
- The integration of AI in healthcare must balance clinical reasoning (sensitivity, specificity, probabilistic science) with the demand for accuracy and causal inference.
- Compassion and empathy should be embedded alongside technical capabilities for holistic, patient-centered care.
- Adherence to globally recognized responsible AI principles (such as a set of nine referenced principles) is central to safe and equitable AI adoption in Indian healthcare.
Agentic Commerce: Trust and Identity in the AI Economy
The session delved into the transformative impact of agentic technology in commerce, examining both current limitations and expansive potential. Panelists discussed 'tacit knowledge'—the nuanced, unstructured preferences and experiences not easily captured in data—and how its integration remains a future challenge for AI agents. Nonetheless, rapid advances have meant that agents are increasingly capable of understanding and acting on consumer preferences, fueled by a growing wealth of user-generated and behavioral data. Merchants are now proactively demanding agentic solutions, marking a pivotal industry shift. Real-world examples highlighted trust, user consent, and safety as critical factors, especially as AI-powered agents begin to handle sensitive transactions autonomously. The conversation emphasized the need for robust trust frameworks incorporating agent identity, granular dynamic authorization, provenance, and strong consumer redress mechanisms. With AI facilitating transactions outside traditional storefronts and transforming merchants' roles toward building intent-driven communities, the need for governance, secure identity management, and intentionally designed friction points was highlighted to safeguard user agency and prevent consumer harm in the emerging AI-driven commerce landscape.
- Tacit knowledge—implicit human preferences not captured in datasets—remains a key limitation for agentic models but is a recognized area for future AI advancement.
- Agentic commerce is shifting rapidly as merchants themselves now directly request agent-based solutions, inverting the traditional technology adoption model.
- Transition from structured (click, transaction) to unstructured (conversational, behavior-driven) data is expanding agents' ability to understand consumer intent.
- Trust, secure consent, and de-instantiation of agents after use (retiring an agent instance and its permissions once its task completes) were cited as essential to consumer adoption, with a real-world financial transaction offered as an example of these principles in practice.
- PayPal and other platforms are developing foundational infrastructure patterned on Aadhaar (for identity), UPI (for transaction flows), and interoperable networks to support agentic commerce.
- Agentic commerce has shown a 7-9x improvement in consumer engagement and conversion versus traditional methods in some apparel-related use-cases.
- AI’s disruption of commerce is altering the role of merchants from transaction points to community-building and intent platforms.
- Agent trust frameworks must deliver cryptographically-secured agent identity, dynamic granular authorization, and provenance (e.g., through blockchain-like methods) for accountability.
- Redress mechanisms, similar to those in current financial systems (e.g., Mastercard’s dispute resolution), are vital for assuring consumer protection in agentic transactions.
- Security solutions must consider conversational modalities (voice vs. chat) and prevent leakage of sensitive data to LLMs or other agents.
- Governance must ensure that seamless automation does not eliminate meaningful consumer oversight—deliberately maintaining friction and requiring user-in-the-loop participation for high-stakes decisions.
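One way to picture the trust framework the panel described is as a signed, scoped, expiring "mandate" that a merchant verifies before honoring an agent's request. The sketch below is a toy under stated assumptions: a shared-secret HMAC stands in for real asymmetric-key agent identity, and every name, scope, and threshold is hypothetical.

```python
import hashlib
import hmac
import json
import time

SECRET = b"demo-registry-key"  # stand-in for a real PKI trust anchor

def issue_mandate(agent_id, scopes, max_amount, ttl_s=3600):
    """Registry issues a signed, scoped, time-limited authorization."""
    mandate = {"agent": agent_id, "scopes": scopes,
               "max_amount": max_amount, "expires": time.time() + ttl_s}
    payload = json.dumps(mandate, sort_keys=True).encode()
    mandate["sig"] = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return mandate

def authorize(mandate, action, amount, high_stakes_limit=5000):
    """Merchant-side check: signature, expiry, scope, and deliberate
    friction (human approval) above a high-stakes threshold."""
    sig = mandate.pop("sig")
    payload = json.dumps(mandate, sort_keys=True).encode()
    mandate["sig"] = sig  # restore for later calls
    if not hmac.compare_digest(
            sig, hmac.new(SECRET, payload, hashlib.sha256).hexdigest()):
        return "reject: bad signature"
    if time.time() > mandate["expires"]:
        return "reject: expired"
    if action not in mandate["scopes"] or amount > mandate["max_amount"]:
        return "reject: out of scope"
    if amount > high_stakes_limit:
        return "escalate: user-in-the-loop approval required"
    return "allow"

m = issue_mandate("shopping-agent-01", ["purchase"], max_amount=10000)
print(authorize(m, "purchase", 120))   # allow
print(authorize(m, "purchase", 9000))  # escalate: user-in-the-loop required
print(authorize(m, "refund", 120))     # reject: out of scope
```

The deliberate "escalate" path mirrors the session's point about maintaining friction for high-stakes decisions rather than automating consent away.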
Making AI Inclusive: Bridging Communities to Shape India’s AI Future
The session at the India AI Impact Summit 2026 centered on the launch and discussion of 'JanAI'—an initiative aimed at democratizing AI in rural India, particularly by empowering youth and marginalized communities. The session opened with a video created by young innovators, illustrating AI’s potential to drive self-reliance much like the historic spinning wheel. Key highlights included the release of the UGRaF (Youth Growth, Resilience, Aspirations, and Future Readiness) report, surveying 3,000 rural youth across 20 states, and revealing strong awareness and daily AI usage even in remote areas. The panel, featuring leaders from academia, non-profits, and industry, stressed the importance of human-centric, inclusive AI design, taking into account the specific needs of women, marginalized communities, and the agricultural sector. Initiatives such as Project Nanda, Digital Green's farmer chatbot, and the vision for AI agents for every citizen were spotlighted. Experts emphasized that AI’s potential reaches beyond social impact, opening new avenues for economic growth and career opportunities in sectors traditionally overlooked by top talent. Attention was drawn to the perils of algorithmic bias and gender-based violence, underlining the need for inclusive governance and ethical frameworks in deploying AI solutions.
- Launch of the UGRaF (Youth Growth, Resilience, Aspirations, and Future Readiness) Index and report, surveying 3,000 rural youth across 20 Indian states: 91% were aware of AI and 55% are daily users.
- JanAI initiative aims to reach a million youth annually to foster AI adoption and awareness, especially in villages.
- Over 100 collaboration partners are involved in the JanAI movement, with 30 present physically at the summit pavilion.
- Digital Green’s AI-powered farmer chatbot has reached over 1 million users, 45% of whom are women, and operates in 8 Indian states.
- Project Nanda (an MIT Media Lab collaboration) and the launch of the 'Duth' DPI layer are both underway, aiming to democratize AI agents for all citizens.
- Panelists called for inclusive design processes, actively listening to needs of rural populations, women, and marginalized groups, and ensuring their representation in solution development.
- Concerns highlighted regarding algorithmic biases, gender-based violence enabled by technology, and the importance of ethical AI governance.
- India’s demographic and digital strengths—the 'three Ds' (data, digital rails, demographic dividend)—were underscored as strategic levers in scaling AI’s social and economic impact.
Responsible AI in the Enterprise: Frameworks, Challenges, and Solutions
The session at the India AI Impact Summit 2026, hosted by Sapion, provided a comprehensive exploration of responsible AI adoption within enterprises, focusing on frameworks, challenges, and implementation solutions. Shri Ain Shagaraj G.I. (Deputy Director General, Department of Telecommunications, and AI guidelines drafting committee member) delivered the keynote, highlighting the increasing ubiquity of AI in enterprise and critical infrastructure, and outlined the government's five-layered framework for AI governance. This framework emphasizes embedding legal instruments, regulatory oversight, technical enforcement, and voluntary compliance directly into AI systems as intrinsic features. Key lifecycle stages—data governance, model governance, runtime governance—were dissected, with detailed examples of policies, standards, standardized assessment processes, tools, and compliance mechanisms positioned as essential for operationalizing responsible AI. The session acknowledged gaps in standardized practices within Indian enterprises and encouraged holistic, measurable, and auditable alignment across governance layers. Panelists then discussed the evolving incentives—particularly regulatory, financial, and reputational risks—that are driving enterprises to integrate responsible AI at scale, marking a shift from theoretical commitment to practical adoption.
- India's emerging responsible AI framework integrates legal, regulatory, and technical governance intrinsically into AI systems.
- The Office of the Principal Scientific Advisor released a key report outlining a five-layer governance model: policy, standards, standardized assessment processes, standard tools/metrics, and voluntary compliance/certification.
- AI governance is mapped across the entire AI lifecycle: data collection/in-use (data governance), model training/assessment (model governance), and AI deployment (runtime governance).
- Enterprises are encouraged to use standardized tools and processes to ensure fairness, accountability, and transparency—moving from ad hoc solutions to structured, comparable, and trustworthy frameworks.
- Voluntary certification schemes are proposed to help startups and MSMEs compete on trust and safety with large technology firms.
- Identified gaps include lack of standardized assessment processes, fairness metrics, and structured incident reporting in enterprise AI adoption.
- Regulatory, financial, and reputational risk are now key incentives for enterprises to prioritize and operationalize responsible AI.
- Responsible AI is positioned as a holistic, lifecycle-aligned process—not just a policy, badge, or standalone technical control.
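The data/model/runtime lifecycle split described above can be read as a set of stage-specific checks an enterprise might automate. The mapping below follows the session's framing of the three governance stages, but the individual check items are illustrative placeholders, not entries from the referenced report.

```python
# Illustrative mapping of lifecycle governance stages to automatable checks.
GOVERNANCE_CHECKS = {
    "data": ["consent recorded for each source",
             "PII minimized or anonymized",
             "provenance logged"],
    "model": ["fairness metrics computed per cohort",
              "evaluation on held-out, uncontaminated benchmarks",
              "model card published"],
    "runtime": ["output filtering enabled",
                "incident reporting channel wired up",
                "audit logs retained"],
}

def audit(completed):
    """Return the checks still open at each lifecycle stage."""
    return {stage: [c for c in checks if c not in completed]
            for stage, checks in GOVERNANCE_CHECKS.items()}

gaps = audit({"provenance logged", "model card published"})
print(gaps["data"])  # two data-governance items still open
```

Structuring governance as explicit, auditable checks is one way to move from ad hoc commitments toward the measurable, comparable alignment the session called for.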
AI for ALL Challenge & Panel on Leveraging AI for Development in the Global South
The session at the India AI Impact Summit 2026 provided multifaceted insights into the practical challenges and strategies for AI startups, especially those navigating the critical phase between prototype and adoption—the 'valley of death'. It began with highlights from Dr. Kaneka Singh Rao, whose company leverages AI and 3D bioprinting to create disease models, reducing reliance on animal testing and providing digital twins for medical and cosmetic research. The panel discussion that followed, moderated by Ms. Magna Bal, zeroed in on the specific obstacles faced by AI ventures across healthcare and agriculture. Panelists emphasized that many AI startups fail not due to technological shortcomings but because of issues like access to real data, regulatory hurdles, misalignment with end-users (doctors, farmers), and insufficient focus on building trust with the value chain rather than just the final user. The discussion stressed the need for patience, sector specificity, deeper academia-industry-investor collaboration, and the importance of combined digital and physical ('phygital') approaches to scaling impactful AI innovations in India.
- Dr. Kaneka Singh Rao presented an AI-powered 3D bioprinted disease modelling platform, creating digital twins of human diseases for pharma and cosmetics R&D, aiming to phase out animal testing.
- The 'valley of death'—the gap between prototype and adoption—claims up to 90% of deep tech startups due to high cash burn, lack of real-world data, regulatory barriers, and insufficient early revenue.
- For healthcare AI, the main challenges identified were: lack of access to real patient data for model training, regulatory restrictions limiting who can use diagnosis models, and the necessity to win over doctors as key users.
- In agriculture AI, 80-90% of startup failures occur between successful pilots and real-world adoption by farmers, compounded by India's hyperlocal farming context, data limitations, and farmers' risk aversion.
- Trust intermediaries such as village-level entrepreneurs and grassroots organizations are essential for scaling AI in agriculture; startups should focus on serving these trusted actors (a 'phygital' model) rather than only targeting farmers directly.
- The summit audience reflected a diverse cross-section from academia, industry, government, education, healthcare, agriculture, and climate sectors, but highlighted the need for more policy presence to support patient, long-term deep tech journeys.
- Interactive poll tools (Mentimeter) were used to gather real-time audience feedback on startup vulnerabilities—top issues included model building, customer acquisition, regulatory barriers, scaling, and monetization.
Scaling AI for Resilience: Institutional and Community Readiness in a Changing World
The opening session of the India AI Impact Summit 2026 focused on the transformative potential of AI-driven, hyperlocal resilience systems to address climate risks and disasters in India. Keynote speaker Mr. Das detailed how AI, when combined with socioeconomic and climate data, empowers local governments, communities, and agencies to act quickly and effectively at scale, especially in high-risk regions like Puri and Klo. Highlighted by real-world case studies from the organization SEEDS, the session covered innovative uses of AI for anticipatory disaster action, real-time needs assessment, and community-centric parametric insurance. Specific initiatives included deploying AI to identify at-risk households ahead of cyclones, heat waves, floods, and landslides, enabling faster recovery and more equitable relief. The inclusive approach put strong emphasis on embedding public voices, local data, and women-led decision-making into the deployment of AI technologies. Ongoing collaborations between NGOs, government, and local stakeholders were celebrated, and the session called for scaling these innovations across India to realize the vision of inclusive, impact-driven AI. The stage was set for a day of high-level panels showcasing how inclusive technological innovation can drive climate and disaster resilience nationwide.
- Unveiling of a 'systems-driven AI-enabled hyperlocal resilience approach' integrating socioeconomic and climate data with local governance.
- Commendation of the India AI Mission of the Ministry of Electronics and IT, Government of India, for building an inclusive AI ecosystem.
- SEEDS showcased real-world AI disaster resilience projects, including needs assessment systems based on satellite imagery and community inputs.
- Case study: Successful deployment of parametric insurance using AI-driven trigger matrices for 2,500 families in Tamil Nadu.
- AI models leveraged to identify hyperlocal heat risks by combining data on temperature, humidity, housing, work patterns, and vulnerability.
- Introduction of anticipatory action frameworks enabling faster disaster response across 18 states and more than 800 districts.
- Inclusive consultations led to adjustment of insurance thresholds and ensured registration of insurance in women's names to enhance social equity.
- AI-driven risk profiling in Himalayan regions to identify household-level vulnerability to floods, landslides, and water scarcity.
- Clear focus on collaborative implementation with local communities, embedding public expertise and voices in AI systems.
- Summit agenda prioritizes scaling risk reduction solutions nationwide by involving multiple stakeholders including government, community leaders, and NGOs.
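Parametric insurance, mentioned in the Tamil Nadu case study, pays a pre-agreed amount as soon as a measured index crosses a trigger, with no per-household loss assessment in between. The sketch below shows the mechanism in miniature; the wind-speed tiers and payout amounts are invented for illustration and are not the trigger matrix SEEDS deployed.

```python
def parametric_payout(observed, trigger_matrix):
    """Pay a fixed amount once an observed index crosses a trigger.
    Tiers must be ordered from most to least severe."""
    for threshold, payout in trigger_matrix:
        if observed >= threshold:
            return payout
    return 0

# Hypothetical cyclone trigger matrix: sustained wind speed (km/h) -> payout (INR).
WIND_TIERS = [(150, 25000), (120, 15000), (90, 8000)]

print(parametric_payout(132, WIND_TIERS))  # 15000
print(parametric_payout(70, WIND_TIERS))   # 0
```

Because the payout depends only on an observable index, relief can be disbursed within days of an event, which is the speed and equity advantage the session highlighted.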
Information Integrity as Infrastructure: Empowering Youth in the AI Age
The session at the India AI Impact Summit 2026, hosted by Moira Patterson of the IEEE Standards Association, focused on the critical theme of 'Information Integrity as an Infrastructure for Trust' in the age of AI. Key panelists, including representatives from AI Commons, OECD, UNESCO, Mastercard, and others, examined the urgent need to address rising challenges posed by misinformation, disinformation, and deepfakes in a rapidly evolving digital ecosystem. The session highlighted the widening gap between policy development and AI technology deployment, stressing the importance of collaborative, proactive governance frameworks and standards to ensure information trustworthiness and societal resilience. Speakers underscored the need for multi-stakeholder involvement—including public sector, private sector, and civil society—and pointed especially to the vulnerabilities of youth in these emerging ecosystems. Case studies and references were made to ongoing regulatory efforts globally—such as the EU AI Act and various national restrictions on youth access to smartphones—with calls to better synchronize innovation and policy, and to move beyond voluntary principles towards robust, systemic governance to safeguard democracy, individual agency, and public trust.
- Session theme: 'Information Integrity as Infrastructure for Trust' in AI ecosystems.
- Panel included experts from AI Commons, UNESCO, OECD, Mastercard, and more.
- Rising incidents of AI-related misinformation and deepfakes, as tracked by OECD.
- Highlighted the 'two-speed' problem: rapid AI deployment outpacing slower policy/regulatory frameworks.
- Need for proactive, collaborative governance frameworks, emphasizing interoperability, accountability, transparency, and stakeholder inclusion.
- Pointed out unique risks to youth and children in navigating AI-powered information environments.
- Referenced the EU AI Act, UK measures, and other national restrictions on youth access to smartphones and social media (e.g., for under-16s in Australia, France, and Spain).
- Emphasis on moving beyond voluntary principles and market-based self-regulation; called for systemic approaches and enforceable standards.
- The private sector, especially large professional associations such as IEEE (75,000+ members in India), is committing significant resources to AI ethics frameworks.
- Concrete example: Scientific submissions using AI-generated references that turned out to be fabricated, highlighting trust challenges even in academia.
Responsible AI for Bharat: Trust, Safety, and Global Leadership
The opening session of the India AI Impact Summit 2026 centered around India's pivotal role in shaping the future of artificial intelligence with an inclusive, impact-first approach. Distinguished panelists discussed the importance of AI development uniquely tailored for India’s diverse socio-economic and linguistic context, highlighting the country's frugal innovation (notably with DPI), and the need to evolve from being a consumer of global AI models to becoming a robust creator of indigenous, trustworthy AI systems. Emphasis was laid on embedding governance and trust at the foundational stage—rather than as an afterthought—ensuring that AI systems are transparent, accountable, and equitable for India's 1.4 billion+ population. The panel underscored the critical role of multilingual capabilities, fairness benchmarks, localized datasets, and making AI accessible to all, along with building talent and a bottom-up ecosystem that turns Indian students and citizens into creators rather than just users. Policy asks included the development of India-specific governance mechanisms to reduce compliance costs, proactive trust and safety frameworks, and global leadership in responsible AI. The keynote from the Minister of State marked the summit’s global significance and India's aim to be at the forefront of the AI-enabled economic and technological trajectory.
- The summit distinguishes itself by focusing on AI's inclusive impact, aiming for benefits to reach all 8 billion people globally—not just select populations.
- India's economic potential in AI is highlighted with projections ranging from $70 trillion (open standards) to $170 trillion (American standards), surpassing the current global economy.
- India's digital public infrastructure (DPI) and digital payments revolution demonstrate success in tailored, frugal innovation for complex, multilingual, and diverse populations.
- India is encouraged to move from AI consumer to AI creator—developing indigenous models using local datasets, regional compute resources, and agentic approaches.
- Call for transforming the world's largest higher education system (enrolling over 40 million students) into a nationwide creator ecosystem, empowering citizens as AI builders.
- Panelists stress embedding trust, governance, and accountability in AI systems from the outset—moving beyond compliance to scientific measurement and meaningful oversight.
- India-specific governance mechanisms are needed to lower compliance costs and build capacity, avoiding wholesale adoption of costly global standards.
- Benchmarks and evaluation methodologies for fairness, accuracy, and multilingual capability must be India-centric, as Western standards do not align with Indian diversity.
- A commitment to making AI accessible, fair, and humanized for all users, ensuring diverse voices are part of ongoing system feedback and improvement.
- Panelists confirm willingness to endorse voluntary global commitments for fairness and accuracy in AI linguistic content.
- Keynote by Minister of State for Commerce, Industry, Electronics, and IT welcomes the summit and underscores its historic significance for India’s technological future.
Sovereign AI and National Security: India’s Digital Path
The session at the India AI Impact Summit 2026 focused on the critical importance of building sovereign, AI-enabled digital public infrastructure in India. Panelists emphasized the necessity for data residency, robust data governance, and open, interoperable data standards as foundational to AI innovation and national security. The discussion recognized that, amid escalating geopolitical risks (e.g., the Russia-Ukraine conflict and the control of compute resources like GPUs), India must strive for sovereignty at every layer of its digital infrastructure, especially in compute and data management. Experts underscored the acceleration of open-source AI models, which are rapidly catching up to proprietary offerings and allow more transparency and control, and stressed a shift from exclusive focus on model training to production-ready inference and a 'toolbox' approach with diverse, task-specific AI models. The session also examined the public-private collaboration required for effective AI implementation—suggesting challenge-based models and hackathons—and the need to embed ESG considerations as India scales national AI and data infrastructure. Applied AI in defense, critical infrastructure, and real-time monitoring were highlighted as particularly urgent, given rising threats and high-velocity innovation.
- Panelists called for urgent creation of cognitive, sovereign digital public infrastructure in India, citing global risks to digital independence.
- India’s talent pool and expertise are strong; main bottlenecks are in compute resources and the potential geopolitical leverage of supply chains (e.g., GPU access).
- Data sovereignty hinges on foundational measures: robust data residency (as mandated by the DPDP Act), governance, and interoperability using open table formats.
- Examples from defense illustrate the growing data demands: European MoD platforms handle 2.5 petabytes; India's needs are at least 10 times higher.
- Panelists recommend leveraging open-source AI models, which lag proprietary ones by only six months, and prioritizing production-ready inference over proprietary training.
- Diversity of AI models is essential; smaller, specialized models yield cost and reliability benefits compared to larger ‘one-size-fits-all’ models.
- AI can now effectively process multimodal (structured and unstructured) data, enabling advanced analytics in sectors like defense (e.g., camouflage effectiveness, maritime surveillance).
- Public-private collaboration is key—challenge-based programs and hackathons were suggested to bridge innovation and deployment.
- ESG (Environmental, Social, and Governance) considerations should be integral to large-scale data and AI infrastructure development, particularly given the environmental impact of data centers.
Beyond Guardrails | Adaptive AI Governance in the Global South
The session at the India AI Impact Summit 2026 highlighted the critical integration of AI and technology into India's judicial and healthcare systems, emphasizing their evolving role from peripheral to fundamental components. Panelists discussed the need for transparent, constitutionally-anchored, and human-centric AI frameworks, with special focus on maintaining judicial empathy and ethical oversight. In healthcare, the challenges of regulating dynamic AI systems were addressed, advocating for adaptive governance models that can respond to rapid technological changes while ensuring trust, safety, and context-relevant innovation. International collaborations, like India's participation in the Global Regulatory Network, and the use of dynamic regulatory tools such as the Navigator, were showcased as vital steps toward equipping India—and the Global South—with frameworks for responsible and equitable AI adoption. The panel concluded by underscoring the importance of public trust, ethical guardrails, continuous institutional capacity building, and inclusion of diverse regional voices through platforms like The Lancet, ensuring AI-driven reforms benefit all sections of society.
- AI and digital technologies are now central, not peripheral, to India's judicial and healthcare architecture.
- The Supreme Court and other judicial bodies are actively implementing technology-driven reforms for case management, research, and access to justice.
- Judicial decisions must remain human-led to ensure empathy, constitutional morality, and contextual sensitivity; AI serves as a support tool, not a replacement.
- Maintaining transparency in 'black box' AI systems is vital—these must become 'glass box' systems accessible and understandable by legal professionals.
- Adaptive governance, rather than static regulatory models, is necessary to keep pace with rapid AI advancements in public health and justice.
- Existing regulatory frameworks are insufficient for dynamic AI; India is encouraged to adopt two-track models: regulatory guardrails across the full AI lifecycle, paired with context-specific technology assessment frameworks.
- India is a pioneer and early signatory in the Global Regulatory Network for AI in healthcare, representing 28% of the world population and leading the Global South’s approach.
- Introduction of tools like the 'Navigator' enable governments to assess readiness, identify gaps, and build responsible AI governance iteratively and contextually.
- Health technology assessment frameworks must distinguish digital and AI health solutions from traditional medical interventions, ensuring value, ethics, and equity are systematically embedded.
- Sustained public trust, continuous institutional oversight, clear ethical guardrails, and robust capacity building are highlighted as non-negotiable components for responsible AI governance.
- Inclusion of diverse regional voices, especially from the Global South, through platforms like The Lancet, ensures context-appropriate research and innovation.
How AI Is Shaping India’s Low‑Carbon Infrastructure | Global Roundtable
The session at the India AI Impact Summit 2026 explored the convergence of AI, innovation, policy, and sustainability within the context of the circular economy and decarbonization. Speakers highlighted practical AI-driven solutions in waste management, such as smart bins and citizen notification systems adopted by 36 municipal corporations, and the pivotal role of startup-corporate collaborations in accelerating technology adoption in industry. Maruti Suzuki's programs have enabled over 6,000 startup screenings, resulting in cost-effective scaling of advanced solutions, while larger global organizations emphasized the necessity of integrating decarbonization with business strategy and the mixed results of policy interventions. The discussion also underscored the complexities of sectoral decarbonization, the need for multi-stakeholder and incentivized approaches, the foundational importance of reliable data, and the contribution of philanthropy and CSR as early-stage capital for scaling impactful innovations. Overall, the session emphasized that technology, business, process, and policy innovation—driven by AI and supported by cross-sector collaborations—are essential for achieving true circularity and sustainability.
- AI-powered smart bins are being used in 36 Indian cities to optimize waste collection, reducing both fuel costs and unsightly waste accumulation.
- Citizen notification systems using AI keep residents informed about collection schedules, improving participation and operational efficiency.
- Maruti Suzuki has screened 6,000+ startups, engages in 200+ ongoing projects, and has made 32 startups tier-one partners with over 200 crore INR in business in six years.
- Engagement with startups has enabled Maruti Suzuki to run pilots at as little as one-tenth or one-twentieth the cost of large tech vendors.
- Large corporates acknowledge that industry transformation now depends on partnerships with agile startups to fill technological and talent gaps.
- Policy interventions (such as EPR, ESG) provide crucial frameworks but must be contextual, multi-stakeholder, and sector-specific to be effective.
- There is a call for a balanced mix of incentives and punitive measures to drive adoption of decarbonization and circular economy practices.
- Data quality and sector-specific AI applications are foundational enablers for operational decarbonization.
- Philanthropy and CSR (corporate social responsibility) are recognized as vital sources of initial risk capital for scalable sustainability innovations.
- True progress in sustainability requires integrated efforts in technology, process, business models, materials innovation, and value-chain-wide measurement.
AI in Action: Addressing Context-Specific Challenges | India AI Impact Summit 2026
The session emphasized a transformative shift in Indian AI entrepreneurship, moving from traditional business-oriented founding teams towards technology-first, highly technical co-founders. Speakers highlighted that the most promising AI startups are now being led by technical experts who tinker with cutting-edge tools before identifying specific problems to solve. However, the discussion also offered a reality check, noting that product excellence alone is not sufficient—successful AI innovation in India hinges equally on go-to-market strategies and accessibility, learning from and collaborating with established platforms to reach customers. The rapid adoption seen after integrating AI solutions onto platforms like WhatsApp underscores the need for a pragmatic blend of technical innovation and market connectivity. The panel concluded that, despite India's ambition to build sovereign AI, leveraging global technologies and corporate partnerships will accelerate and anchor that journey.
- Indian AI startups are increasingly founded by deeply technical founders rather than traditional business leaders.
- A shift towards 'technology-forward' company building, where solutions are created before problems are defined.
- Building great AI products must be matched with strong go-to-market strategies to ensure reach and adoption.
- An example was shared of AI adoption accelerating only after integration with WhatsApp, illustrating the importance of meeting users where they are.
- Panelists recommended leveraging existing global technologies and corporate partnerships rather than solely focusing on entirely indigenous development.
- India's ambition to become a sovereign AI nation is best served by a blended approach—marrying technical capability with practical market access.
From AI User to Creator: India’s Next Innovation Leap
This session at the India AI Impact Summit 2026, led by representatives from Microsoft Research and partners such as Karya, focused on the critical topic of evaluation and benchmarking for AI models, especially in India's multilingual and multicultural context. The speakers articulated that despite significant advances, AI systems continue to perform inadequately on many Indian languages and dialects, largely due to existing evaluation processes and benchmarks being biased towards English and Western data. The challenges of creating fair, representative, and contamination-free benchmarks were discussed, along with efforts to design community-driven evaluation processes. Recent projects such as 'Sanskriti' (now 'Samiksha'), and 'Pariksha' aim to address these barriers by involving local communities in the creation and validation of benchmarks, thus fostering trust and ensuring real-world relevance. The session underscored the urgent need for India-centric, community-owned evaluation frameworks to bridge the gap between published AI metrics and actual societal impact, which is especially pertinent for the Global South.
- India's AI evaluation landscape remains disproportionately focused on English and Western benchmarks, limiting the effectiveness of AI models for local use-cases.
- India has over 19,000 unique dialects, yet benchmark coverage for Indian languages is limited or non-representative.
- Community engagement is being leveraged to develop more representative evaluation frameworks, as seen in the projects 'Pariksha' and 'Samiksha' (collective analysis/evaluation).
- 'Pariksha' involved Karya workers in the evaluation process, pioneering community-driven benchmarking in India.
- Major challenges with current benchmarks include contamination (benchmarks seen during model training), lack of representativeness, cheating by AI agents, and misinterpretation of results.
- AI evaluation in India is pushing for tailored benchmarks for each language, culture, and context, moving away from direct translation or Western-centric evaluation practices.
- Proper evaluations help prioritize investments, ensure accountability, and create long-term impact by guiding model development and deployment for diverse user needs.
- Recent Oxford and industry studies highlight fundamental flaws in thousands of widely used AI benchmarks.
- The summit anticipates announcements of new India-focused and multilingual evaluation tools and methodologies.
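The contamination problem flagged above (benchmark items leaking into training data) is often screened for with simple n-gram overlap heuristics. The sketch below is a toy version of that idea, not the method used by Pariksha, Samiksha, or any other project named in the session.

```python
def ngrams(text, n=3):
    """Set of word n-grams in a text (case-insensitive)."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def contamination_score(benchmark_item, training_corpus, n=3):
    """Fraction of the item's word n-grams also present in the training
    corpus; high overlap suggests the item was seen during training."""
    item = ngrams(benchmark_item, n)
    corpus = ngrams(training_corpus, n)
    return len(item & corpus) / len(item) if item else 0.0

corpus = "the quick brown fox jumps over the lazy dog near the river"
print(contamination_score("quick brown fox jumps over", corpus))       # 1.0
print(contamination_score("a completely unrelated sentence here", corpus))  # 0.0
```

Real decontamination pipelines work at much larger scale with hashing and fuzzy matching, but the principle is the same: a benchmark only measures generalization if its items are demonstrably absent from training data.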
AI, Agriculture, and DPI: Unlocking Economic Growth
The session at the India AI Impact Summit 2026 focused on the transformative potential of Digital Public Infrastructure (DPI) and AI in revolutionizing agricultural advisory for smallholder farmers in India and the Global South. Moderated by Nita Bassin (CEO, Digital Green), keynote speaker Sanjay Jain (Gates Foundation) highlighted the success of India’s DPI initiatives such as AgriStack and Vistar, now backed in the 2026 Union Budget, in providing scalable, low-cost digital services that empower millions of farmers with crucial information, financial inclusion, and risk management. Ethiopia’s FIDA system was cited as a replicable example. The discussion stressed the importance of coherent, interoperable systems versus siloed approaches, emphasizing sustainability, trust, and public benefit. Fatima Al-Mula (UAE Presidential Courts) detailed the UAE’s new global AI ecosystem for agriculture, including partnerships with the Gates Foundation and the establishment of the Institute for Agriculture and AI, focused on building, scaling, and localizing AI-powered advisory products as digital public goods. The panel outlined practical steps for governments to champion ownership, foster partnerships, mobilize resources, and drive effective adoption from concept to national scale, while ensuring the most marginalized benefit. Inclusion, sustainable models, and cross-border South-South collaboration were named as pillars for global agricultural transformation through AI and DPI.
- India’s Union Budget 2026 earmarks new resources for AgriStack and Vistar, digital platforms supporting AI-driven agricultural advisory.
- Krishi Samriddhi in Odisha has reached 7 million farmers at less than 18 cents per farmer per year, demonstrating cost-effective digital scaling.
- AgriStack enables national registries for farmers, crops, and plots, forming the backbone for AI solutions.
- Ethiopia’s FIDA digital ID system allows AI-enabled, localized advisory for farmers, with direct payment integration via the FIDA Pass app.
- DPI must be nationally led, trust-driven, and open-source (e.g., Gates Foundation’s investment in MOSIP), avoiding fragmented, profit-driven solutions.
- Siloed digital systems (e.g., Nigeria’s 13 separate ID systems) are less sustainable compared to unified approaches (e.g., Aadhaar in India, FIDA in Ethiopia).
- UAE launched Abu Dhabi’s AI ecosystem for global agriculture and the Institute for Agriculture and AI at Mohammed bin Zayed University of AI, focused on developing and sharing AI agricultural products as public goods.
- UAE’s initiatives include partnering with CGIAR and Gates Foundation, mobilizing financial and technical resources, compute power, and training for countries with limited AI and infrastructure capacity.
- Critical steps for successful government adoption include strong leadership, ecosystem building, and institutional pathways from concept to scaling AI-enabled advisory.
- Panelists stressed the necessity of ensuring inclusion so benefits reach smallholder farmers, particularly women, through context-specific, local language delivery.
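The cost-effectiveness figure above is easy to sanity-check with simple arithmetic; the helper function below is purely illustrative:

```python
def annual_program_cost(farmers_reached: int, cost_per_farmer_usd: float) -> float:
    """Upper-bound yearly cost of a digital advisory program."""
    return farmers_reached * cost_per_farmer_usd

# Krishi Samriddhi figures from the session: 7 million farmers
# at under $0.18 per farmer per year.
total = annual_program_cost(7_000_000, 0.18)
print(f"Upper-bound annual cost: ${total:,.0f}")
```

That ceiling of roughly $1.26 million per year for 7 million farmers is what the panel meant by cost-effective digital scaling.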
Redesigning the AI Economy: Ethical Data Pipelines at National Scale
This session at the India AI Impact Summit 2026 highlighted the urgent challenges in evaluating AI systems for India’s deeply multilingual and multicultural context. Panelists from Microsoft Research and the project Karya emphasized the need for community participation in building and assessing AI language models to ensure fairness, accuracy, and real-world impact. The speakers showcased their collaborative journey, including a nine-year partnership, and introduced two landmark benchmarks—परीक्षा (Pariksha) and समीक्षा (Samiksha)—which aim to address data coverage gaps, more representative multicultural evaluation, and robust, contamination-resistant benchmarking. They underscored that current AI evaluation benchmarks are heavily skewed towards English and Western data, often fail to represent Indian languages or local contexts, and suffer from methodological flaws. By prioritizing community-led, context-specific evaluation and highlighting the limitations of mainstream benchmarks (such as bias, contamination, lack of rigor, and ease of gaming), the session set out a new vision for trustworthy AI evaluation tailored to the Global South, particularly India’s unique language landscape.
- The work over the past three years has engaged thousands of people across every Indian state to build foundational datasets for fairer and better AI models.
- India has over 19,000 unique dialects/subcultures, which must be reflected in AI systems and their evaluations.
- Most existing AI benchmarks are English-centric, with poor representation for Indian languages and contexts.
- Community involvement is essential in both creating and evaluating AI models to ensure relevance and fairness for local users.
- The project 'परीक्षा' (Pariksha) was launched to engage community workers in evaluation across 10 Indian languages, setting a precedent for participatory approaches.
- A new benchmark, 'समीक्षा' (Samiksha), focuses on collective analysis and evaluation, furthering efforts for inclusive AI by integrating communities directly into the evaluation process.
- AI model evaluation remains challenging due to indeterminate outputs, prompt sensitivity, and lack of standardized metrics for open-ended tasks.
- Existing benchmarks and leaderboard results are prone to flaws due to bias, contamination (models being exposed to test data during training), and are susceptible to manipulation, which undermines trust.
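Benchmark contamination, one of the flaws listed above, is often screened for by measuring verbatim n-gram overlap between test items and training text. The sketch below is a generic illustration of that idea, not the Pariksha or Samiksha methodology; the whitespace tokenizer and n-gram size are arbitrary assumptions:

```python
def ngrams(text: str, n: int = 8) -> set:
    """Set of word n-grams from whitespace-tokenized text."""
    toks = text.lower().split()
    return {tuple(toks[i:i + n]) for i in range(len(toks) - n + 1)}

def contamination_score(test_item: str, corpus_docs: list, n: int = 8) -> float:
    """Fraction of a test item's n-grams that appear verbatim in the corpus.

    A score near 1.0 suggests the model may have seen the item during training.
    """
    item_grams = ngrams(test_item, n)
    if not item_grams:
        return 0.0
    corpus_grams = set()
    for doc in corpus_docs:
        corpus_grams |= ngrams(doc, n)
    return len(item_grams & corpus_grams) / len(item_grams)
```

Production-scale checks use hashed n-grams or embedding similarity over billions of tokens, but the principle is the same.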
Building Confidence in AI: Evaluation, Verification, and Assurance
This session at the India AI Impact Summit 2026 provided a deep dive into the emerging ecosystem for assessment, governance, and risk management in AI. Expert panelists from global organizations, academia, and policy bodies discussed the critical components for building trust in AI, including reliable assessment and reporting, robust governance structures, and the necessity of context- and sector-specific frameworks. They highlighted both regulatory and market-driven approaches, emphasizing that effective AI assurance depends on collaborative efforts among regulators, industry, community groups, and interdisciplinary researchers. Notable case studies and emerging international standards were referenced, and participants were invited to join collaborative initiatives such as the Global Assurance Sandbox. The session underscored that a one-size-fits-all model is inadequate and that AI assurance must evolve rapidly to match the pace of AI deployment across diverse sectors.
- The session focused on developing an ecosystem for assessment, governance, and risk management of AI systems.
- Panelists included leaders from the AI Verify Foundation, Oxford's IPIE, the Centre for Responsible AI, EY, ACCA, and the Stimson Center.
- Three key types of AI assessments were discussed: governance (organizational management), conformity (compliance with laws and standards), and performance (quality metrics).
- Market-driven AI assurance is emerging alongside regulatory compliance, enabling organizations to demonstrate trustworthiness to stakeholders.
- Singapore's Global Assurance Sandbox currently has 20–30 active use cases, generating case studies to inform potential future policies and standards.
- IPIE's global research has shown it is feasible to expect firms to provide model cards and data provenance information for AI auditing.
- Collaborative, multi-stakeholder engagement—including regulators, industry, and community groups—is essential for credible AI governance.
- AI audit frequency and criteria should be risk-based and context-specific; sector-level regulation is particularly important.
- A case example revealed that imported AI models can be inaccurate (e.g., Western-trained fetal age models underestimated fetal age for Asian fetuses by 30–40%), underscoring the need for local adaptation and validation.
- Participants were invited to engage with ongoing initiatives to build the global AI assurance community.
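The fetal-age case above is a concrete argument for validating imported models on local data before deployment. A minimal sketch of such a check, computing systematic error on a locally collected sample (the function names and the 5% tolerance are illustrative assumptions, not an assurance standard):

```python
def mean_percent_error(predictions: list, ground_truth: list) -> float:
    """Average signed percentage error; large negative values mean
    the model systematically underestimates on this population."""
    errors = [(p - t) / t * 100 for p, t in zip(predictions, ground_truth)]
    return sum(errors) / len(errors)

def needs_local_recalibration(predictions, ground_truth, tolerance_pct=5.0) -> bool:
    """Flag a model whose systematic error on local data exceeds the tolerance."""
    return abs(mean_percent_error(predictions, ground_truth)) > tolerance_pct
```

A model reading 30–40% low on local cases, as in the example cited, would be flagged immediately by even this crude test.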
AI Literacy at Scale: Bridging Learning Gaps and Building Global AI Leadership
The session focused on the urgent need for universal AI literacy in India, emphasizing that the current sectoral and elite-driven access to technological skills cannot keep pace with the rapid advancement of AI. The speakers outlined India’s unique advantages—such as its demographic dividend, culture of frugal innovation, and diverse population—as unparalleled assets for leading the world in applied AI and AI literacy at scale. A critical economic argument was presented, highlighting the potential $1 trillion AI-driven GDP contribution within five years, which constitutes 25% of India's total GDP, with suggestions that this is a conservative estimate if real adoption occurs. The session introduced the 'AI for All Global' Universal AI Literacy Framework, grounded in international research, which champions a continual, role-based, and persona-driven approach to AI skill-building rather than one-off courses. The framework’s four pillars—Engage, Create, Manage, and Design—progress from basic awareness to technical leadership, aiming to embed AI readiness across all segments of society. The speakers warned against unequal productivity gains and a widening AI divide if India does not act, arguing that mass AI literacy is essential to transform opportunity, bridge digital inequities, and ensure India’s position as a leader in applied and inclusive AI.
- AI is projected to contribute $15 trillion globally to GDP in the next five years; India's potential share is $1 trillion, or 25% of its current GDP.
- India holds a unique demographic advantage, with a majority population under age 35, making mass skill transfer in AI faster and easier.
- India has demonstrated capacity for large-scale technology adoption, as shown by UPI and digital public infrastructure rollouts.
- The 'AI for All Global' initiative proposes a Universal AI Literacy Framework, adapted from OECD and European Commission research.
- The framework proposes a shift from isolated, technical, English-only courses to a continuous, role- and persona-based learning model for AI literacy.
- Four interconnected pillars of the framework: Engage (awareness), Create (practical use), Manage (governance), and Design (technical leadership).
- India currently ranks #1 globally for enrollment in AI-related courses on Coursera but only 89th in demonstrated skill proficiency, indicating a major practical skills gap.
- The economic and social risks of failing to implement universal AI literacy include perpetuating an AI divide, loss of productivity gains, increased vulnerability to misinformation, and missed opportunities for economic and societal transformation.
- Speakers urge focusing not on building AI models, but on creating the world’s most AI-literate and AI-applied society.
Empowering the Human Edge: Workforce Transformation in the Age of AI
This session at the India AI Impact Summit 2026 focused on the multifaceted challenges and opportunities presented by AI adoption in education, organizational transformation, and regional innovation. Panelists discussed the necessity of robust upskilling frameworks and ethical AI use, emphasizing the integration of domain expertise with AI competencies. They highlighted resistance to change, especially among established educators and professionals, and stressed the importance of mindset shifts facilitated by curiosity and continuous learning. Key announcements included Telangana’s unification of its AI initiatives under the new autonomous entity ICOM to drive integrated innovation, skill development, startup acceleration, and global collaboration—an effort to position the state as a global AI hub. Perspectives from industry leaders underscored the operational impact of AI—ranging from workforce transformation in large conglomerates to rapid MVP development for startups—as well as the critical role of constantly updating mid-level and technical talent. Google reaffirmed its commitment to AI reskilling at scale in India, leveraging partnerships to create an AI-ready workforce capable of tackling contemporary and future challenges.
- Emphasized the need for upskilling frameworks and ethical oversight in the use of AI, particularly in education and across professions.
- AI as an enabler: True value arises when AI is combined with domain expertise (e.g., finance, computer science, healthcare).
- Discussion on the paradox of teaching with AI but assessing without AI, and the need for clear standards on AI use in education and assessment.
- Anecdotes illustrated empowering individuals (like teachers) to become agents of change for AI adoption in their own institutions.
- Highlighted employee resistance to AI, especially among experienced professionals, due to discomfort with changing established practices.
- Telangana launched ICOM, a new autonomous body that unifies all state AI and innovation initiatives to foster startup acceleration, skilling, financing, and co-innovation; goal: to become a global top-20 AI innovation hub.
- India is second in the world by number of AI users (e.g., of OpenAI’s ChatGPT), with significant growth in AI awareness and usability.
- Startups can now develop MVPs far more rapidly thanks to AI tools; VCs and large firms observe a reduction in analyst needs and shifts in skill demand.
- Large conglomerates like Tata Group are prioritizing upskilling for senior leadership, creating 'AI champion' roles in middle management, and continuously updating technical talent to stay ahead of rapid tech advances.
- Google reinforced its role in accelerating AI learning and skilling in India through partnerships and ecosystem engagement.
Designing Farmer-Centric AI: Standards and Policies for Smart Agrifood Systems
The session at the India AI Impact Summit 2026, jointly hosted by the Indian Council of Agricultural Research, the Department of Agricultural Research & Education (Ministry of Agriculture and Farmers Welfare, India), and the Fraunhofer Institute (Germany), assembled an international, multi-disciplinary panel of experts to discuss advancing AI standards, interoperability, and the responsible scaling of digital agriculture solutions for smallholder farmers. The focus was on identifying critical AI standards gaps, promoting international cooperation, ensuring interoperability among numerous agri-tech platforms and startups, and addressing the real needs of India's vast population of small-scale farmers. The ITU (International Telecommunication Union) highlighted its pivotal role in developing 200+ AI-related global standards (with 200 more in the pipeline), many targeted at enabling digital agriculture, and set the stage for robust, scalable, and inclusive technology solutions. Indian and German panel contributions emphasized the necessity of moving beyond piecemeal innovation towards integrated, interoperable platforms that reduce farmer input costs, improve yields, and democratize benefit delivery, with real-world examples such as a Telangana-based pilot that reduced pesticide use by 30% through startup integration and data sharing. The session underscored the challenge of breaking down data silos, coordinating across India's rich agri-institutional landscape, and operationalizing international standards to deliver concrete, scalable impacts to grassroots farmers.
- Session organized by Indian and German agricultural and research agencies, highlighting strong international cooperation.
- Panel included representatives from Indian Council of Agricultural Research, Fraunhofer Institute, ITU, IIT Delhi, CSIR, and German government.
- Focus on identifying gaps in AI standards, responsible deployment, and mechanisms for scaling AI in farmer-centric agri-food systems.
- ITU has approved over 200 AI standards, with an additional 200 in the pipeline; many apply directly to digital agriculture and interoperability.
- Specific ITU efforts include a focus group on AI & IoT in digital agriculture, and standards for smart livestock and machine-vision-based farm intelligence.
- India has 75+ agricultural tech portals and 2,018 agri-tech startups (as per the Press Information Bureau of India), but there are major issues with duplication and lack of interoperability.
- India has 130 million small-scale farmers (70–80% cultivating less than 1 hectare); reaching and empowering them is a stated priority.
- A German-funded pilot in Telangana integrating multiple startups reduced pesticide and chemical use by over 30%, demonstrating the effectiveness of interoperability and integrated advisory services.
- Challenges cited include data silos and need for better data governance across hundreds of research institutions, KVKs, and universities.
- Audience participation and feedback were encouraged via a QR code system, aiming for inclusive, actionable session outcomes.
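Interoperability among the 75+ portals mentioned above typically starts with a shared, platform-neutral message schema. The sketch below is a generic illustration of that approach; the field names are assumptions, not an actual Indian or ITU standard:

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class AdvisoryMessage:
    """A minimal, platform-neutral crop advisory record."""
    farmer_id: str        # reference into a shared farmer registry (cf. AgriStack)
    crop: str
    advisory_text: str
    language: str         # language tag, e.g. "te" for Telugu
    source_platform: str  # which of the many portals produced the advisory

msg = AdvisoryMessage("F-001", "cotton",
                      "Reduce pesticide spray; pest pressure is low.",
                      "te", "pilot-app")
print(json.dumps(asdict(msg)))  # any compliant platform can parse this payload
```

Agreeing on even a small common payload like this is what lets advisories from one startup flow into another's delivery channel, as in the Telangana pilot.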
AI for Water Resilience and Sustainable Growth
The session at the India AI Impact Summit 2026, chaired by Shui Capill, brought together thought leaders to debate the intersection of artificial intelligence (AI) and India's water resilience amid mounting climate and security challenges. The discussion highlighted AI's transformative potential—moving far beyond today's capabilities—to address India's worsening water crisis, affecting over 1.4 billion people and holding global repercussions. Dr. David Wood detailed seven powerful AI-driven interventions from advanced water filtration and desalination to AI-enabled strategic policy advice, underscoring immense opportunities but also the urgent need for wise human oversight. Sujit Na grounded the discussion with an emphasis on digital public infrastructure and the practical complexities of demand-side adoption, noting significant barriers in data quality, ecosystem maturity, and risk diffusion. Both speakers agreed that while AI is rapidly advancing, real impact for India's water security will require robust infrastructure, data unification, and adaptable, context-specific deployment strategies, echoing the importance of India's trajectory for global sustainability and economic stability.
- The International Center for Sustainability (ICFS) is launching the first phase of India's Water Security Project by the end of March 2026, focusing on diagnostics and solutions.
- Over 700 million people globally may be displaced by water scarcity by 2030 (UNICEF), positioning India's water management as critical for global stability.
- AI is seen as a potential game-changer for water resilience in India, with applications including smarter irrigation, improved desalination, advanced water quality monitoring, and real-time strategic advice.
- Dr. David Wood presented seven AI-powered interventions, with the seventh—AI as a strategic advisor—framed as the most transformative.
- AI advancements are proceeding rapidly, with significant breakthroughs expected in related fields like nanotechnology, biotechnology, and cognitive sciences.
- AI also introduces risks, including misinformation/manipulation and heightened uncertainty, demanding wise human oversight.
- Sujit Na warned of a disconnect between rapid AI supply-side development and slower demand-side adoption, especially for social-impact sectors like water, due to data fragmentation, lack of verifiability, and risk management issues.
- Effective AI deployment in water security requires robust, adaptable digital public infrastructure (DPI) designed for solution optionality and trust, not rigidly curated systems.
- The discussion reinforced India's centrality to global supply chains, economic growth, and the urgent need for scalable, context-sensitive AI solutions to its water crisis.
AI and Productivity: Unlocking the Next Wave of Economic Performance
The session at the India AI Impact Summit 2026 addressed the critical theme of "Sovereign AI for National Security" against a backdrop of rapid advancements in AI capabilities and shifting global geopolitics. The panel featured international and national thought leaders, including Pier Stefano of KPMG and Lt. General H Shiba of the Indian Army, who explored the global momentum behind digital sovereignty, its specific relevance for defense, and the nuanced challenges of ensuring trustworthy AI for sensitive domains. Key discussions highlighted the exponential growth in AI's abilities—especially self-replicating models—and the urgent need for countries to assert sovereignty over not just data, but also AI models, infrastructure, and applications. For the defense sector, Lt. General Shiba detailed four crucial military AI domains (persistent surveillance, cognitive warfare, autonomous systems, and decision support), stressing the importance of sovereignty at each step to mitigate automation bias, adversarial attacks, and reliability risks in uncertain combat situations. Both speakers emphasized the necessity for educational outreach, clear regulatory frameworks, context-specific assessments of what must remain sovereign, and robust national testing platforms to benchmark and validate AI systems before their deployment in national security contexts.
- Global concern for digital and AI sovereignty has rapidly expanded beyond Europe to include Canada, India, the Middle East, Africa, and Australia, driven by geopolitical tensions and advances in AI.
- Recent releases of AI models (notably GPT Codex 5.3) demonstrate AI's capability to contribute to its own creation, signifying an exponential—not linear—growth in capability and risk.
- Key policy and technical debate: Which layers require true sovereignty (infrastructure, applications, data, or AI models), and to what extent?
- Lt. General H Shiba identified four core defense domains where AI sovereignty is critical: persistent surveillance (to counter automation bias), cognitive/narrative warfare (for superior, defensive and offensive algorithms), autonomous systems (protection from adversarial attacks, data poisoning, or backdoors), and decision support (especially decisions under uncertainty).
- AI’s current inductive logic lacks human abductive reasoning, raising risks regarding overreliance and trust in high-stakes military decisions.
- A national or defense-grade AI benchmarking and certification platform is urgently needed to evaluate and authorize AI systems for sensitive applications.
- There's a call for increased education among policymakers, clearer regulatory reforms, and context-aware data and AI management strategies—recognizing that not all government data needs maximal protection.
- Session underlined the impossibility of a one-size-fits-all or instant switchover to sovereign systems; change must be approached layer by layer and tailored to context.
Reimagining Gender and Technology | Building Safer, More Inclusive AI Platforms
The session delved into nuanced perspectives on online anonymity and digital safety, especially as they relate to marginalized communities, with recommendations leaning towards risk-based and targeted identity solutions rather than universal identification mandates. Panelists stressed the need for smarter accountability and detailed policy, informed by actual user experiences, to ensure internet spaces remain both safe and inclusive. Shifting focus towards India's AI ecosystem, speakers highlighted the ongoing transition from traditional business-centric startups to technology-driven, founder-led AI ventures. Despite acknowledging India's need to catch up globally in AI, the discussion underscored optimism through recent initiatives like 'Activate,' designed to empower technical founders and build a foundational AI stack for Indian entrepreneurs, signaling a maturing innovation environment and an emphasis on marrying technical prowess with customer-centric business models.
- Anonymity online plays a dual role—enabling abuse but also supporting marginalized groups in safe digital participation.
- Recommendations emphasize a proportionate, risk-based approach for identity verification instead of blanket identity disclosure.
- The discussion underscores the challenge in balancing digital safety, accountability, and anonymous access.
- 'Activate' was introduced as a new AI venture-support stack in India, active for approximately three months.
- India's AI landscape is shifting from business-led ventures to deeply technical, founder-driven innovation.
- Panelists acknowledge India is not currently leading in AI but are optimistic about acceleration through targeted ecosystem initiatives.
- Emphasis on policy grounded in lived experiences and smarter tools for digital trust and recourse.
How AI Can Transform Justice | The Future of India’s Judicial System | Panel Discussion
This session at the India AI Impact Summit 2026 focused on the integration of artificial intelligence (AI) within India's judicial system, highlighting both the opportunities and challenges of adopting AI-based tools. Panelists underscored the foundational constitutional values—liberty, equality, and justice—as essential guiding principles when implementing technology in legal frameworks. The discussion covered three main aspects of AI in judicial decision-making: intelligent perception, intelligent cognition, and intelligent decision-making. Examples were provided from India and global jurisdictions like Estonia and China, emphasizing the potential for AI to expedite case management, enhance efficiency, and overcome language barriers. However, significant risks were identified, including algorithmic and data-driven biases (such as hindsight and recency bias), the 'blackbox' problem resulting in lack of transparency, and the dangers of AI-generated hallucinations or incorrect outputs when used without sufficient human oversight. Case studies from India, the US, and elsewhere illustrated past issues with bias, fabricated legal citations, and challenges to the legitimacy of decisions made by machines instead of delegated human authorities. Panelists advocated for responsible and calibrated adoption of AI, established practical safeguards (like restricting AI to only records and ensuring transparency), and stressed the need for continuous human supervision. The session concluded with a call to focus on augmenting, not automating, human judgment and expertise with AI tools while prioritizing accountability, transparency, and data localization to ensure trustworthiness and constitutional fidelity.
- Emphasis on aligning AI implementation in the judiciary with India's constitutional values: liberty, equality, and justice.
- Three pillars of AI-based Automated Decision Making (ADM) in the judiciary: intelligent perception, cognition (including self-learning and feedback loops), and intelligent decision-making.
- India Supreme Court initiative to use AI for translation and research to overcome language barriers.
- Global case studies: Estonia's use of AI in transportation and anonymized trials; China’s Project 206 for judicial support (case management, evidence collection, etc.); US legal challenges related to algorithmic bias (e.g., COMPAS tool).
- Highlighted two major risks: algorithmic bias (translating/pre-empting human and societal biases) and the 'blackbox' problem (opacity and lack of explainability in AI decision-making).
- Specific judicial biases discussed include hindsight bias and recency bias, which can be amplified by AI if not controlled.
- Documented court cases (India and US) of AI mishandling—presentation of non-existent legal cases, wrong parties in proceedings, and improper legal citations resulting from unsupervised AI use.
- Proposed safeguards: confining AI to the record, preventing it from inventing or inferring facts or giving legal opinions, ensuring traceability of outputs, logging all interactions, and maintaining transparent operations.
- Need for explicit human authority in final decisions; delegating decision-making entirely to AI is legally and ethically problematic.
- Data localization and secure storage (such as using network storage and data diodes) suggested to uphold privacy and sovereignty.
- Advocated 'augmented' rather than 'automated' justice—AI as a support tool, not a replacement for human judges.
- Called for calibrated, responsible adoption of AI, with human supervision at every critical stage to prevent bias, error, and legitimacy crises.
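The safeguards proposed above, confining AI to the case record and logging every interaction, amount to a thin accountability wrapper around whatever answering step is used. The sketch below illustrates the pattern with a deliberately naive keyword match; the class and field names are hypothetical, not an actual court system:

```python
import datetime

class RecordBoundAssistant:
    """Answers only from supplied case-record passages and logs every interaction."""

    def __init__(self, case_record):
        self.case_record = case_record
        self.audit_log = []  # traceability: every query and its grounding is retained

    def ask(self, question: str) -> str:
        # Deliberately naive relevance check: a real system would use proper
        # retrieval, but the safeguard pattern is the same.
        q_words = set(question.lower().split())
        hits = [p for p in self.case_record if q_words & set(p.lower().split())]
        answer = hits[0] if hits else "No answer in the record."
        self.audit_log.append({
            "time": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "question": question,
            "grounded": bool(hits),  # was the answer drawn from the record?
        })
        return answer
```

Because every output is either a verbatim record passage or an explicit refusal, and every exchange is logged, hallucinated citations of the kind documented in the session cannot silently enter a judgment.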
Embedding Trust in AI Innovation: Governance and Quality Infrastructure
The session at the India AI Impact Summit 2026 explored the transformative effects of AI on inspection, sensory evaluation, quality infrastructure, and cyber risk management in India. Stakeholders from industry, accreditation bodies, and government discussed concrete benefits such as significant reduction in inspection times, improvements in data accuracy, and the creation of digital sensory libraries. Key challenges highlighted included regulatory gaps, the need for holistic standards for AI in inspections, and ensuring AI-generated reports are legally defensible. The Quality Council of India outlined plans for Digital Quality Infrastructure (DQI), leveraging AI-powered agents for document handling, voice assistance, interoperability, and contextual knowledge management. From a policy perspective, the Ministry of Electronics and Information Technology stressed AI’s dual role in cyber risk—both enabling better defense and enhancing attack vectors—and emphasized key priorities, such as secure-by-design systems, explainability, and transparency via AI bills of material. The session positioned India as poised to leapfrog legacy processes by embracing AI-driven, user-centric, and interoperable quality and cybersecurity frameworks, drawing inspiration from national digital successes like UPI.
- AI-powered digital twin models reduced inspection times by 80% (from 4–5 days to hours) and improved data accuracy to 99% (up from 80–90%).
- Operational deployments in Europe face regulatory barriers due to frameworks oriented around manual rather than AI-assisted inspections.
- ISO/IEC 42001 and ISO/IEC 23894 constitute steps toward AI management and risk management in accreditation, though cross-supply-chain adoption remains a challenge.
- Electronic sensory labs (e-nose, e-tongue) use AI to rapidly and objectively analyze, benchmark, and digitally store aroma profiles, with a database of 275,000 odor compounds.
- AI does not replace human experts but augments decision-making and preserves institutional expertise.
- The Quality Council of India announced the upcoming rollout of Digital Quality Infrastructure (DQI), emphasizing leapfrogging traditional models via four AI-powered agents: smart scanning (keyless data capture), voice-enabled best practice guidance, smart APIs for interoperability, and adaptive knowledge repositories.
- DQI’s goal: make quality assessments accessible, affordable, consistent, and trusted—akin to how UPI revolutionized payments.
- The Ministry of Electronics and Information Technology highlighted AI’s impact on cyber defense and threats, focusing on incident response, new attack surfaces, and AI-specific vulnerabilities (poisoning, model extraction, evasion).
- Top policy priorities for AI assurance include autonomy control based on cyber risk, secure-by-design development, explainability/auditability, and transparency via an AI bill of materials.
- India, along with partners like the USA, is advancing global harmonization for supply chain transparency in AI, with published joint guidelines.
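An AI bill of materials is analogous to a software bill of materials: a machine-readable inventory of the models, datasets, and dependencies inside a system. The structure below is a minimal illustration only; its fields are assumptions, not the published joint guidelines:

```python
from dataclasses import dataclass, field, asdict
import json

@dataclass
class AIBillOfMaterials:
    """Machine-readable inventory of an AI system's components."""
    system_name: str
    model_components: list      # e.g. base models and fine-tunes, with versions
    training_datasets: list     # provenance of the data used
    third_party_dependencies: list = field(default_factory=list)

aibom = AIBillOfMaterials(
    system_name="inspection-digital-twin",
    model_components=[{"name": "defect-detector", "version": "2.1"}],
    training_datasets=[{"name": "plant-images-2025", "source": "internal"}],
)
print(json.dumps(asdict(aibom), indent=2))  # transparency artifact for auditors
```

Publishing such an inventory is what makes the supply-chain transparency and auditability goals above actionable for downstream regulators and buyers.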
Mapping Gender Norms with AI: Rethinking Representation in Global Media
This session at the India AI Impact Summit 2026 provided an in-depth exploration into how Large Language Models (LLMs) learn and reproduce social and cultural norms, focusing particularly on the challenges of achieving genuine cultural alignment in AI. The speakers presented research that utilized scripts and subtitles from 5,400 Hollywood and Bollywood films to train models to detect and interpret culturally-specific norms, with specific emphasis on how emotions like 'shame' and 'pride' are expressed differently based on culture, gender, and socioeconomic background. The discussion underscored the potential biases and inequities that can be unintentionally amplified when training LLMs on such datasets, emphasizing the critical need for context-aware and demographically-sensitive alignment to ensure AI tools benefit all users fairly. Further, the session highlighted methodologies for extracting local norms, the importance of high-quality, representative data, and the necessity to continuously calibrate and validate trust in AI responses, especially for community health workers and other front-line professionals. Ultimately, it called for a nuanced, non-blind approach to cultural alignment that recognizes and mitigates harmful norms and social biases.
- Researchers compiled and analyzed a dataset of 5,400 Hollywood and Bollywood film scripts and subtitles to extract over 10,000 specific cultural norms.
- A novel approach using LLMs to detect violations of social norms by searching for expressions of emotions like 'shame' and 'pride' in film dialogues was presented.
- Findings concluded that shaming was 4.5 times more frequent in Bollywood films compared to Hollywood, highlighting a stronger collective, conformity-enforcing social dynamic in Eastern societies versus individualism-promoting norms in the West.
- The research noted significant gender disparity: women were depicted as experiencing shame more often, while men were associated with pride, and these biases were embedded in the training data.
- Analysis showed that economic and demographic factors influence norm enforcement—low-income individuals were shamed for behaviors for which high-income individuals were not.
- The study stressed the challenge in collecting large, high-quality, and culturally-local data for model alignment and the difficulty in accessing social media data due to privacy and cost barriers.
- Speakers called for non-blind, continuous calibration of AI alignment processes to ensure harmful or inequitable norms are not perpetuated or amplified by AI systems.
- Proposed methods included (a) direct human expert annotation, (b) LLM-assisted norm extraction, and (c) employing social science-informed validation strategies.
- Emphasized the importance of providing context-aware, demographically-calibrated AI responses, particularly for critical applications such as community health programs.
- Highlighted a need for dynamic models of trust calibration, moving away from a one-size-fits-all approach and instead adjusting trust dynamically based on context and ambiguity.
AI for Road Safety: Saving Lives Through Intelligent Systems
The session at the India AI Impact Summit 2026 featured key stakeholders from IIT Madras and the Ministry of Road Transport and Highways, focusing on leveraging AI and data-driven governance to improve road safety in India. Notable initiatives included the nationwide rollout of integrated crash databases (IRAD and EDAR), the introduction of 'Thini'—an AI-driven learning and licensing platform—and the launch of a hackathon to crowdsource next-generation road safety solutions. Odisha was highlighted as the first state to implement a people-centric, AI-supported road safety governance system. Policymakers discussed current challenges in enforcement, the integration of vehicle-to-vehicle (V2V) communications, the need for more accurate accident data, and proposed incorporating road safety and licensing into school curricula using AI-powered tools. The session emphasized public participation, empathy in technology solutions, and policy momentum to enable these innovations across India and the Global South.
- The Ministry of Road Transport and Highways (MoRTH) launched national databases IRAD and EDAR to centralize and analyze road accident data, with implementation led by NIC and foundational research from IIT Madras.
- Odisha is the first state to adopt a people-centric, AI-supported policy framework for holistic road safety governance, spanning planning, implementation, and monitoring.
- Multiple AI-based tools are now in use for engineering interventions, administrative actions, and trauma care, moving governance from individual discretion to data-driven processes.
- 'Thini,' an AI-powered, three-gate learner licensing and road safety education platform, has been announced in beta; stakeholder feedback is being solicited before formal launch.
- A national road safety hackathon—open to the public with mentoring for the first 50 registrants—aims to develop innovative, AI-enabled solutions to real-world problems like accident response and emergency services discovery.
- Policy advances include the Motor Vehicles Act's foundation for electronic monitoring, the release of a new data-sharing policy, and ongoing support to states for AI adoption through funding and technology enablement.
- Vehicle-to-vehicle (V2V) communications are being mandated (with a 30 MHz band) to enable real-time feedback for drivers, aiming to reduce unreported or non-fatal accidents and improve preemptive safety.
- There is a push to integrate driving education (supported by AI platforms like Thini) into standard school curricula, especially for increasing awareness and skill-building among young people, including girls.
- The session highlighted persistent gaps in accident data reporting at the local level and stressed the role of AI in addressing these 'dark holes.'
- Emphasis was placed on embedding empathy and human-centric considerations into technological and policy solutions for road safety.
Scaling Equitable AI Advisory Systems: From Vision to Action
The session focused on the launch and collaborative development of the AGX AI (Agriculture Extension AI) initiative, aimed at harnessing AI and generative AI to empower smallholder farmers globally through contextually relevant and equitable agricultural advisory services. Speakers emphasized the need for more than just technological innovation, highlighting the importance of ecosystem coordination, shared infrastructure, and inclusive frameworks to overcome barriers faced by small-scale farmers. The AGX AI initiative, led by the GU Foundation and global partners, revolves around eight key pillars, including data contextualization, model benchmarking, localization, sensitive handling and incentivization of farmer data, equitable service delivery, and enabling policy environments. India was highlighted as a leader, given its digital public infrastructure (e.g., Aadhaar), providing valuable lessons for other countries. The session also introduced a participatory learning agenda designed to bring together stakeholders for shared learning, improved coordination, and iterative improvement in AI-driven agriculture. The approach seeks to build comparable evidence, accelerate learning, and ultimately move from fragmented pilot projects to scalable, impactful systems benefiting diverse farming communities.
- AGX AI is a global collaborative initiative focused on empowering smallholder farmers with AI-driven, localized and context-aware agricultural advisory services.
- Emphasis on the need for ecosystem coordination, shared infrastructure, and frameworks to move technology from pilots to real-world impact.
- Eight key pillars form the backbone: contextualized data (including minor/local languages), robust benchmarking, inclusive localization, sensitive farmer data practices, equitable private sector involvement, access, and enabling policy environments.
- India is recognized as a leader through its established digital public infrastructure, with lessons to inform scalable AI adoption in agriculture elsewhere.
- Three foundational discussion papers on data, benchmarking, and models have been released, with five more in development to cover critical knowledge gaps.
- A participatory, international learning agenda is being launched to foster shared research, rapid coordination, and continuous improvement across regions and organizations.
- Trust, safety, equity, and accountability were underlined as critical to adoption and meaningful scale in AI advisory for agriculture.
- Challenges identified include incentivizing data sharing, standardizing benchmarking across multiple dimensions (model performance, user experience, impact), and ensuring sustainable, inclusive policy and governance.
- The session invited active participation to collaboratively refine the learning agenda and address thematic/practical gaps through ongoing stakeholder engagement.
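The benchmarking challenge noted above, comparing advisory systems across model performance, user experience, and impact, can be made concrete with a minimal scoring sketch. The dimension weights, model names, and scores below are hypothetical, not from the AGX AI discussion papers; a weighted composite is just one simple way to make multi-dimensional results comparable.

```python
# Hypothetical multi-dimensional benchmark aggregation for advisory models.
# Each dimension is scored on a 0-1 scale; weights are assumed, not standard.
DIMENSIONS = {"model_performance": 0.4, "user_experience": 0.3, "impact": 0.3}

def composite_score(scores: dict[str, float]) -> float:
    """Weighted average of per-dimension scores."""
    missing = DIMENSIONS.keys() - scores.keys()
    if missing:
        raise ValueError(f"missing dimensions: {missing}")
    return sum(DIMENSIONS[d] * scores[d] for d in DIMENSIONS)

advisory_models = {
    "model_a": {"model_performance": 0.82, "user_experience": 0.60, "impact": 0.55},
    "model_b": {"model_performance": 0.74, "user_experience": 0.81, "impact": 0.70},
}
ranked = sorted(advisory_models,
                key=lambda m: composite_score(advisory_models[m]),
                reverse=True)
print(ranked)  # → ['model_b', 'model_a']
```

A real framework would also have to standardize how each dimension is measured before any such aggregation is meaningful, which is precisely the gap the session identified.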
AI and the Future of Skilling | India AI Impact Summit 2026
The session at the India AI Impact Summit 2026 commenced with a distinguished panel representing global leaders in academia, government, and digital innovation. The key focus was on the transformative impact of AI on skilling, human resource development, and the evolution of higher education institutions. Panelists traced the trajectory of technological disruptions—such as the industrial revolution, computers, and the internet—to frame AI as another major wave promising both challenges and opportunities for workforce development. Dr. Vijay Kumar from MIT emphasized the importance of maintaining core educational values and practices amidst change, highlighting the shift from content delivery to quality, practice-based, and active learning at scale. The group stressed a movement from standardized, degree-driven education toward competence-based, lifelong learning that addresses diverse learner needs. With AI as a disruptive force, educational institutions must innovate curricula, pedagogy, and delivery models while fostering interdisciplinary approaches to solve increasingly complex societal problems.
- Panel included figures from MIT, Indian government advisory, IICT, NSDC, and Xstep Foundation, reflecting a multi-stakeholder approach.
- Session topic: 'AI and the Future of Skilling: Strengthening Human Resources and Transformation of Higher Education Institutions.'
- Historical perspective: AI likened to past disruptions (industrial revolution, computers, internet), with each wave ultimately expanding jobs and prosperity.
- Shift in educational focus from degrees/diplomas to skill and competency-based credentials, reflecting evolving employer and learner needs.
- Dr. Vijay Kumar emphasized 'quality at scale' in education, advocating for active, practice-based, and interdisciplinary learning experiences.
- MIT’s OpenCourseWare, launched in 2001, was highlighted as an early model of extending educational value globally, one that now requires more nuanced, personalized approaches.
- Recognition that scaling education today involves addressing diverse learner backgrounds, aspirations, and lifelong upskilling, not just mass distributing identical content.
- Call for educational institutions to innovate with new courses, formats, and pedagogies to address AI-driven changes in workforce requirements.
- Emphasis on preparing learners to apply analytical rigor and compassion to solve complex, real-world problems in interdisciplinary settings.
Inside India’s Frontier AI Lab: Impact for the Global South
The session at the India AI Impact Summit 2026 provided a comprehensive overview of India's current digital landscape, emphasizing the exponential growth in data consumption, the rapid expansion of data center and cloud infrastructure, and the critical need for domestic AI model development tailored to India's linguistic and socio-economic diversity. Speakers underscored India’s position as a leading data creator and consumer, while noting that it lags significantly in hosting data within domestic infrastructure. Strategic policy interventions, such as prolonged tax holidays for data center investments and the passage of the Digital Personal Data Protection (DPDP) Act, were highlighted as pivotal in repositioning India for both national and global leadership in AI. The session stressed the necessity of indigenous AI models to effectively address local language nuances and border security, while concurrent investments in GPUs and scalable cloud were recognized as vital. Regulatory perspectives detailed how international data sharing, facilitated through robust privacy regimes and cross-border corridors, could further empower India’s AI ambitions, enabling more inclusive and reliable models suitable for both local challenges and the broader Global South.
- India has over 1 billion smartphone users with the highest per capita data consumption globally, surpassing the US and China.
- While India creates and consumes 20% of the world's data, only 3% is hosted within the country’s borders, highlighting both a challenge and an opportunity.
- In the past seven years, India's data center capacity has grown sevenfold and is projected to reach 3 gigawatts by 2030, positioning it among the largest globally.
- Despite rapid infrastructure growth, further expansion is essential for India to serve domestic needs and global AI demand, especially for the Global South.
- The government has introduced a 20-year tax holiday for data center investments in this year's budget, aiming to incentivize further infrastructure build-out.
- India is focusing on developing indigenous AI models, emphasizing the importance of local language, dialects, and cultural nuances, particularly for critical applications like defense and border security.
- GPU capacity in India is scaling rapidly, with announcements from major industry players and government-backed funding for startups.
- The Digital Personal Data Protection (DPDP) Act and related regulations have improved India’s position in global AI collaborations by addressing data protection concerns.
- Concept of 'data corridors' between countries is being discussed to enable effective cross-border data use and AI model training, exemplified by potential use cases like cross-border KYC.
- Speakers suggested future technological solutions may involve combining AI and blockchain/distributed ledger technologies to ensure both trust and reliability.
AI for Agriculture: Data, Multimodality, and Feeding the Future
The session at the India AI Impact Summit 2026 highlighted the rapid advancement and adoption of AI in agriculture, with a strong emphasis on inclusivity, co-design with farmers, data governance, and public-private collaboration. Examples such as the PlantNet platform and Farmer Chat chatbot showcase the practical deployment of AI tools for disease detection and extension services. The FAO’s Digital Agriculture and AI Innovation Roadmap was introduced, aiming to guide initiatives from conception to scale through a decentralized, federated framework and innovation hubs. Stakeholders repeatedly stressed the need for clear governance, ethical frameworks, data protection, and training to empower small and marginal farmers, while ensuring the responsible and equitable use of AI technologies. Demonstrations, incentivization, and policy support were identified as essential to accelerating adoption among ready and reluctant farmers alike, with the vision of achieving significant income gains and transforming India’s agri-food systems from Green Revolution to “Green Intelligence.” The session concluded with a call to ensure AI serves as an inclusive servant—embedding farmers' voices rather than dominating their choices—and underlined India's readiness and ambition to lead in responsible, AI-powered agricultural transformation.
- AI tools such as PlantNet and Farmer Chat are being deployed to aid small farms with disease recognition and advisory services.
- The FAO launched a Digital Agriculture and AI Innovation Roadmap—a federated, decentralized framework for bringing AI innovations from ideation to scale.
- Four key services identified in the FAO roadmap: AI governance assessment framework, AI sandbox for experimentation, a science-technology-innovation portal, and leveraging innovation hubs.
- Eleven digital public goods, including Agrovoc and geospatial platforms, are being contributed to support open data, software, and standards.
- A public-private partnership approach is encouraged for innovation, with collaboration among government, private sector, research institutes, startups, and farmer organizations.
- Demonstrated solutions, like Baramati Agriculture Trust's partnership with Microsoft and Oxford, reportedly increased sugarcane productivity 25–30%.
- Farmers in India are increasingly ready to adopt AI to boost resilience and productivity, but the key challenges are awareness, outreach, and trust in new tools.
- Incentives for companies demonstrating solutions and strong data privacy frameworks (especially amid large-scale data collection in genomics/livestock) are critical.
- The ambition is to raise small and marginal farmer income to at least 1 million rupees annually per acre through AI-driven solutions.
- Emphasis on co-designing AI tools with farmers to ensure relevance, transparency, and empowerment, rather than imposing opaque 'black box' solutions.
- The session advocated inclusive growth, aiming for no farmer to be left behind, and underscored responsible deployment of AI for the common good.
AI and the Future of Learning and Work | Leadership Dialogue
This session at the India AI Impact Summit 2026 focused on the complexities and imperatives of embedding cultural intelligence and local social norms into large language models (LLMs). The speakers highlighted the problem of LLMs being biased towards Western-centric data, which can result in outputs that don't accurately reflect Indian social beliefs and practices. To address this, researchers presented a methodology involving the systematic collection and annotation of social discourses from diverse sources—including films, subtitles, and expert inputs—to identify locally relevant norms, particularly focusing on markers like shame and pride which vary greatly between Eastern (collectivist) and Western (individualist) societies. The team extracted over 10,000 norms using advanced annotation techniques (including leveraging LLMs themselves), and analyzed the differential treatment of gender and demographics in Bollywood versus Hollywood movie scripts, finding that Bollywood scripts exhibit much higher prevalence of shame, especially related to gender role violations, compared to their Western counterparts. The session underscored the necessity for high-quality, contextually relevant datasets to avoid perpetuating bias, emphasized rigorous validation of norms to ensure equity and fairness, and advocated for ongoing research into measuring and mitigating harmful norms in AI outputs across demographic groups.
- LLMs trained on Western-centric data often fail to capture or represent Indian and other local social norms.
- Research presented involved collecting and annotating large datasets (over 10,000 social norms) from movies, social media, and expert inputs for cultural norm extraction.
- Significant differences were observed: Bollywood movies display 4.5 times more shame-related behaviors than Hollywood, consistent with collectivist-cultural literature.
- Gender analysis showed women are depicted as recipients of shame more often than men in both industries.
- Use of LLMs for automatic annotation and norm identification tested, but human expertise remains essential, especially to overcome demographic skews and biases.
- Validation of norms is crucial to remove those that reinforce societal biases and to ensure models promote equity and positive outcomes for diverse users.
- Example presented: expressions of love are coded as shameful in Bollywood but not in Hollywood, affecting AI-generated recommendations and responses.
- Researchers are actively exploring how demographic variables (gender, income) affect model suggestions and aim to ensure fair, unbiased AI outputs.
- Challenges cited include lack of access to high-quality, large-scale, contextually meaningful datasets and expensive data sources (e.g., social media APIs, copyrighted media).
- The research highlights the dynamic need to recalibrate model alignment according to the societal context, not just technical feasibility.
Hornbill Comes to India AI Impact Summit 2026 @TaFMANagaland
Leveraging AI4All: Pathways to Inclusion
The session at the India AI Impact Summit 2026 focused on the inherent challenges and critical success factors required for inclusive AI deployment in India and globally. Key observations emphasized that AI alone does not guarantee greater inclusion, particularly for marginalized or under-resourced populations, such as persons with disabilities. The speaker, Nirmal, highlighted the 'purple economy,' representing a $150B market for assistive tech products in India, as a missed business opportunity rather than a charity case. The report launched at the summit stresses three pillars for inclusive AI—design, access, and investment—with recommendations for participatory design, real-world usability, and government incentive alignment. Exemplary AI use cases included Wadhwani AI's 'Shishu Mapan' tool for community health workers, Meta's Ray-Ban smart glasses improving accessibility for the visually impaired, and the 'Yes to Access' app for mapping physical access barriers. The panel discussions reinforced the necessity of local language support, last-mile connectivity, and institutional capacity-building. Further, real-world testimonials, such as Adalat AI’s legal information chatbot and Rwanda’s Scaling Hub, illustrated both the practical challenges of AI adoption in low-resource and multilingual contexts, and the importance of ecosystem-building for scalable impact.
- AI alone is insufficient for inclusion; connective infrastructure, user interfaces, and focused skilling are necessary.
- The 'purple economy'—market for assistive tech for persons with disabilities—represents a $150B opportunity in India.
- Governments urged to serve as ‘anchor buyers’ to create market incentives for accessible and inclusive AI products.
- 3 pillars for inclusive AI: participatory design, real-world usable access (low bandwidth, offline), and aligned investment.
- 33% of the world (2.6 billion people) lack internet access, underlining the need for offline and low-resource AI solutions.
- Showcase use cases: Wadhwani AI’s Shishu Mapan tool (offline neonatal measurement), Meta's Ray-Ban smart glasses (co-designed with users), ‘Yes to Access’ accessibility mapping app, and Adalat AI’s multilingual chatbot for legal case information.
- Adalat AI improved court efficiency 2-3x via multilingual legal transcription and digital workflow tools.
- Caution advised in using AI for legal advice due to contextual complexity; information facilitation safer.
- Rwanda AI Scaling Hub exemplifies a national strategy for adapting and scaling AI aligned with socioeconomic priorities and ecosystem development.
- Institutional capacity-building in government is seen as crucial for mainstreaming and scaling inclusive AI.
📅 Sessions from 2026-02-17
From Guidelines to Ground: Institutional AI Safety in the Global South
The final session of the India AI Impact Summit 2026 featured leaders from Nvidia, Dialogue, Mastercard, legal, and diplomatic spheres discussing AI safety challenges, institutional readiness, regulatory frameworks, and the unique hurdles faced by the Global South. Panelists highlighted India’s vibrant deep tech startup ecosystem, with over 200,000 startups and 7,500 deep tech players—many deeply invested in AI. Nvidia outlined its long-term, flexible support model for AI startups, in contrast to traditional cohort-based accelerators. Policy and legal experts emphasized India’s fragmented AI governance: lacking a unified law, depending on a regulator-light, top-down approach, and struggling with capacity, especially in enforcement. Mastercard pointed to the primacy of user trust and the need for ethical, bias-minimized, human-centric design in financial AI systems, warning of user frustration from false positives. All agreed that the challenge is not just technological but involves ecosystem-wide issues—governance, compliance, clarity in liabilities, digital and regulatory literacy, capacity for oversight, and environmental concerns like AI-driven e-waste. The session called for coordinated, principle-based innovation, closing capacity gaps, clarifying responsibilities among AI actors, and iteratively developing regulation that fits India and the broader Global South’s needs, rather than importing ill-fitting external templates.
- India has over 200,000 startups, including approximately 7,500 deep tech startups, many of which are AI-first.
- Nvidia's Inception Program provides long-term, non-cohort-based support to Indian AI startups, including infrastructure, technical mentorship, go-to-market, and occasional funding.
- Current Indian AI governance is fragmented and regulator-light, with no overarching AI law—intentional to avoid burdening innovation but risking enforcement gaps.
- Capacity for institutional enforcement is limited, particularly regarding technical oversight and resources such as GPU availability.
- There is a lack of clarity in legal liability distribution across AI developers, deployers, and users, creating potential innovation and risk management issues.
- Heavy reliance on top-down regulation and the absence of robust industry self-regulation were flagged as risks to effective, adaptive AI governance.
Agri-AI at Scale: Global Innovations Driving Food Security
The session at the India AI Impact Summit 2026 showcased a significant leap in the AI-driven modernization of agriculture, focusing on collaborative efforts between Indian and Israeli institutions. IIT Ropar is leading development of a domain-specific Agri LLM (Large Language Model) and indigenous high-precision weather stations for Indian farmers, enabling real-time decision support in crop and soil management at a low cost. With these Made-in-India solutions being integrated in multiple states, the system leverages accurate local environmental data for actionable AI models accessible through mobile apps. Israeli partners, having invested in 34 Centers of Excellence with the Indian government and already deploying advanced irrigation technology (such as N-Drip's gravity-fed, sensor-driven smart systems), envision scaling these collaborations nationwide. The Israeli Ambassador emphasized the transformative potential of democratizing AI and knowledge directly for millions of Indian farmers. Further, the panel explored governance implications of emerging AI-facilitated biotechnology, particularly the capacity to refactor genomes for crop optimization, raising complex legal, ethical, and security issues. Together, the dialogue underscored unprecedented potential in Indo-Israeli joint ventures on smart agriculture, automation, and responsible technology deployment, with both countries seeking to ensure inclusivity, sustainable productivity, and robust governance.
- IIT Ropar is developing an Agri LLM (Large Language Model) tailored for farmers' queries, built entirely with indigenous electronics and software.
- 99% accurate weather stations engineered at IIT Ropar are already deployed across several Indian states, costing only ₹15,000 per unit and providing multi-parameter environmental data (rainfall, radiation, wind speed, soil conditions).
- Direct collaboration with farmers and FPOs ensures iterative technology development based on real-world feedback.
- Israel-India partnership has resulted in the establishment of 34 Centers of Excellence, each with $3-4 million investment, promoting AI-enabled modern agriculture including precise irrigation and high-quality seedlings.
- Demonstration projects using Israeli company N-Drip's low-energy, sensor-based irrigation technology were piloted in nine Indian locations, aimed at national scaling after ministry review.
- A vision shared to provide every Indian farmer with AI-enhanced, individually tailored knowledge (weather, planting, irrigation, pest control) directly via mobile apps.
- Emphasis on democratization of AI tools and application development—empowering farmers and agribusinesses to independently build and customize solutions.
- Unique governance and bioethical challenges identified as AI accelerates the ability to refactor DNA and engineer biotechnological crops—concerns include patents, GMOs, security, potential for misuse, and corporate consolidation.
- Call for proactive, adaptive regulatory frameworks to address risks associated with rapid AI-driven advances in biotechnology.
- Acknowledgment of joint Indo-Israeli commitment to advancing agricultural productivity, while safeguarding inclusivity and ethical deployment.
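The pipeline from weather-station readings to farmer-facing advice can be illustrated with a minimal sketch. The field names, thresholds, and rule logic below are assumptions for demonstration only; IIT Ropar's actual system feeds multi-parameter station data into trained AI models rather than fixed thresholds.

```python
# Toy decision step standing in for an AI-driven irrigation advisory.
# Thresholds and reading fields are hypothetical, not from the real system.
from dataclasses import dataclass

@dataclass
class StationReading:
    soil_moisture_pct: float      # volumetric soil moisture, %
    rainfall_next_24h_mm: float   # forecast rainfall, mm
    wind_speed_kmh: float         # unused here; real models take many inputs

def irrigation_advice(r: StationReading) -> str:
    """Simple threshold logic in place of a trained model."""
    if r.rainfall_next_24h_mm >= 10:
        return "skip irrigation: significant rain expected"
    if r.soil_moisture_pct < 25:
        return "irrigate today: soil moisture low"
    return "no irrigation needed"

print(irrigation_advice(StationReading(18.0, 2.0, 6.0)))
# → "irrigate today: soil moisture low"
```

The value proposition described in the session is that each such recommendation is grounded in hyper-local station data rather than district-level averages, and delivered in the farmer's own language through a mobile app.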
Building Sustainable and Resilient AI Infrastructure
This session at the India AI Impact Summit 2026 focused on the pivotal role of data center infrastructure in enabling AI growth in India, highlighting both the transformative potential of AI-enabled infrastructure and the need for resilient, sustainable foundational layers. Industry leaders discussed India's current data center capacity, revealing its significant growth trajectory yet identifying a stark gap compared to global leaders like the US. They credited government policies and substantial capital inflow for the sector's rapid development, while emphasizing that meaningful progress hinges on speedier execution and scaling to meet future AI and digital demands. Sustainability emerged as a central theme, with Google's global perspective underscoring the importance of embedding decarbonization and environmental considerations into infrastructure from the start, not as an afterthought. The session outlined the practical trade-offs between speed, cost, reliability, and sustainability, advocating for region-specific design and the leveraging of India's surplus power capacities. Ultimately, India's data center ecosystem was lauded for its engineering talent, incident-free operations, and growing resilience, setting the stage for rapid expansion if all stakeholders execute cohesively.
- India's total operational data center capacity stands at about 1.5 GW, compared to 20–25 GW in the US (as of end-2024).
- An additional 3–3.5 GW of capacity is under development in India, aiming for a tripling of capacity within 2–4 years, but experts argue a much faster ramp-up (up to 10 GW) is necessary.
- Capital is not a constraint; the sector attracts significant investment due to low risk and reliable returns.
- Recent government policy and regulatory changes (including tax reforms in the latest Union Budget) have removed key hurdles, stimulating further investment and easing hyperscaler entry.
- India benefits from surplus power capacity and robust transmission infrastructure, reducing typical energy-related build-out delays seen in Western markets.
- The Indian data center ecosystem boasts a strong track record for resilience and safety, with no recorded availability or security incidents impacting operations.
- Sustainability is now integral, not peripheral, to data center design: Google has contracted 22 GW of clean energy globally and stressed the importance of climate considerations from materials sourcing to operation.
- Rapid growth in AI demand is spurring simultaneous infrastructure expansion and the adoption of innovative, location-sensitive design choices (e.g., choosing between air or water cooling based on local resources).
AI for Power: Accelerating the Clean Energy Transition
The 'AI for Power – Accelerating the Clean Energy Transition' panel at the India AI Impact Summit 2026 focused on leveraging artificial intelligence to propel India's grid transformation, with an emphasis on distribution, demand flexibility, and the integration of renewable energy and electric vehicles. Speakers from MEN Systems, Climate Collective, the International Energy Agency, and the International Solar Alliance outlined ongoing projects and programs, such as Electron Vibe, which supports startup-driven AI innovation in grid modernization. The panel highlighted India's ambitious targets—500 GW of non-fossil fuel power capacity, 1 crore rooftop solar households by 2027, and 30% EV sales by 2030—as context for the urgent need for a modern, resilient, digitalized grid. Practical AI use cases discussed include predictive weather-responsive building management, smart EV charging, and digital utility twins, while panelists underscored regulatory needs, knowledge sharing, and overcoming procurement and data barriers for large-scale deployment. The conversation positioned India as a key test bed for scalable, global-south-relevant AI interventions in the power sector.
- MEN Systems detailed AI applications in building management systems that optimize comfort and renewable integration using climate, occupancy, and generation data.
- Demand flexibility is a major focus, with AI enabling load shifting (e.g., EV charging, pre-cooling, water pumping) to sync with renewable generation peaks.
- MEN Systems piloted vehicle-to-load, vehicle-to-home, and vehicle-to-grid experiments demonstrating AI's role in dynamic energy storage and demand response.
- India's national targets: 500 GW of non-fossil fuel electricity capacity, 1 crore (10 million) rooftop solar households by March 2027, and 30% EV sales by 2030.
- Electron Vibe, an innovation program by Climate Collective, has engaged 22 utilities and 63 startups since 2020, launching 20 pilots to integrate AI in distribution grids.
- Electron Vibe's expanded platform features an open innovation program, a knowledge hub, and a solutions marketplace for AI tools and aggregated data sharing among utilities and startups.
- The International Solar Alliance is developing digital utility twins through its multi-donor trust fund, starting with pilots in Jaipur, to improve loss reduction and grid efficiency.
- Procurement, data access, legacy systems, and utility risk-aversion remain barriers to scaling AI in power; knowledge sharing initiatives aim to accelerate transition from pilots to full deployments.
- Panelists emphasized the increasing complexity of energy systems due to electrification and renewable variability, making AI-enabled digitalization essential for grid resilience.
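The load-shifting idea discussed above can be sketched as a toy greedy scheduler. Everything below (the function, the solar curve, the load names and sizes) is illustrative only, not something presented by the panel: flexible loads are simply assigned to the hours with the most spare forecast renewable generation.

```python
# Illustrative sketch of demand flexibility (hypothetical, not from the
# panel): shiftable loads are moved into the hours with the highest
# remaining forecast renewable generation.

def schedule_flexible_loads(renewable_forecast_kw, loads):
    """Assign each shiftable load to the hour with the most spare
    renewable capacity left. `renewable_forecast_kw` holds one value per
    hour; `loads` is a list of (name, demand_kw) tuples."""
    spare = list(renewable_forecast_kw)
    schedule = {}
    # Place the largest loads first so they claim the sunniest hours.
    for name, demand_kw in sorted(loads, key=lambda l: -l[1]):
        hour = max(range(len(spare)), key=lambda h: spare[h])
        schedule[name] = hour
        spare[hour] -= demand_kw
    return schedule

forecast = [0, 0, 5, 40, 90, 120, 110, 60, 10, 0]  # toy solar curve (kW)
loads = [("ev_charging", 50), ("water_pump", 20), ("pre_cooling", 30)]
plan = schedule_flexible_loads(forecast, loads)
```

A real dispatch engine would also respect load deadlines, tariffs, and grid constraints; the sketch only captures the core intuition of syncing flexible demand with renewable peaks.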
Global Cooperation for Ethical and Sustainable AI in Healthcare
The panel at the India AI Impact Summit 2026 focused on embedding inclusivity and responsibility in the design and deployment of AI for healthcare. Drawing from multi-year research, the speakers presented a co-designed set of eight tenets developed through collaborations between Australia and India, guiding participatory and iterative AI development. The discussion emphasized moving from abstract principles to practical methodologies that meaningfully center patients and healthcare providers, with real examples provided around addressing gender bias in AI, the importance of user feedback loops (as showcased in a game for the hearing-impaired), and the necessity of epistemic diversity within AI development teams. The session highlighted the critical importance of not defaulting to AI as a solution, urging instead a patient- and provider-first approach, and stressed the harm that can arise when marginalized groups are not equitably included throughout the technology lifecycle. The conversation aimed to move the narrative from policy talk to actionable, on-ground impact by building systems where no one is left behind.
- Research underpinning the session identified a lack of actionable playbooks for responsible and inclusive AI, particularly in healthcare.
- A two-year, Australia-India collaboration resulted in eight adaptable 'tenets' for designing, deploying, and monitoring inclusive AI systems.
- The tenets promote iterative design, participatory feedback, flexible application, and centering users’ lived experiences across the AI lifecycle.
- Real-world case: To address gender-based accuracy disparities in a tuberculosis diagnostic AI, developers traded off some overall accuracy for improved fairness across genders.
- A ‘Sign It’ app for the hearing-impaired demonstrated the value of tight feedback loops and treating users as co-experts in building assistive technologies.
- Speakers urged a people-first approach, emphasizing that technology—especially AI—need not be the default solution for healthcare challenges.
- Diversity was discussed beyond numbers, highlighting the need for epistemic diversity and inclusive decision-making, not just representation.
- Panelists warned of the risks of ableism and other biases being scaled up through AI, underscoring the importance of centering marginalized and disabled communities in both team composition and methodology.
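The accuracy-for-fairness trade-off in the tuberculosis case can be illustrated with a minimal sketch on synthetic scores. Nothing here reflects the actual diagnostic model: the point is only that choosing a separate decision threshold for the group a model serves poorly can close the accuracy gap between groups.

```python
# Toy illustration (synthetic data, not the TB diagnostic model from the
# session): a per-group decision threshold can shrink the accuracy gap
# between groups that a single global threshold leaves behind.

def accuracy(scores, labels, threshold):
    preds = [s >= threshold for s in scores]
    return sum(p == bool(y) for p, y in zip(preds, labels)) / len(labels)

def best_threshold(scores, labels, candidates):
    # Pick the candidate threshold with the highest accuracy for this group.
    return max(candidates, key=lambda t: accuracy(scores, labels, t))

# Group A's scores are well separated; group B's are shifted lower,
# mimicking a model trained mostly on group A's data.
group_a = ([0.2, 0.3, 0.7, 0.8, 0.9], [0, 0, 1, 1, 1])
group_b = ([0.1, 0.2, 0.4, 0.5, 0.6], [0, 0, 1, 1, 1])

global_t = 0.65
acc_a_global = accuracy(*group_a, global_t)  # strong for group A
acc_b_global = accuracy(*group_b, global_t)  # weak for group B

candidates = [0.15, 0.3, 0.45, 0.65]
t_b = best_threshold(*group_b, candidates)   # a lower cutoff suits group B
acc_b_fair = accuracy(*group_b, t_b)
```

In practice the per-group thresholds would be chosen on validation data under an explicit fairness criterion, and, as the panel noted, equalizing group performance can cost some overall accuracy.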
Generation AI 2047 | Shaping the Future of India’s AI Leadership
The 'Generation AI @ 2047: From Court to Farms – How India is Building AI for Bharat' session at the India AI Impact Summit 2026 showcased how artificial intelligence is moving beyond laboratories and boardrooms into the hands of youth, farmers, and communities across India's heartland. Driven by collaborations between organizations like IBM, the Government of Uttar Pradesh (through AI Pragya), and grassroots partners such as 1M1B, the session highlighted both large-scale policy efforts and real-world innovations by young Indians. IBM reaffirmed its commitment to skill 5 million Indians in AI and emerging technologies by 2030 through freely accessible platforms. The session included announcements on tailored AI-powered tools, such as 'Career Jyoti' for career counseling and 'Agri Shield AI' for climate-resilient advice to farmers, as well as advanced urban air quality solutions. Youth-driven prototypes tailored to regional needs were spotlighted, showing the tangible grassroots impact of AI adoption. Uttar Pradesh was presented as a model of inclusive, scalable, and youth-led AI transformation. The event underscored that India's AI leadership will emerge not just from top-down mandates, but from empowering local innovation, practical skilling, and the translation of AI into everyday problem-solving that reaches every community.
- IBM committed to skilling 5 million Indians in AI, data security, and quantum by 2030 via the free, anytime-access IBM SkillsBuild platform offering over 1,000 courses.
- The AI Pragya initiative in Uttar Pradesh has enabled thousands of students to build AI prototypes addressing local challenges, from agriculture to water safety.
- 1M1B in partnership with IBM has been upskilling Indian youth across urban and rural areas, demonstrating real-world AI adoption beyond metropolitan hubs.
- 'Career Jyoti,' an LLM-powered, regionally relevant AI career counseling tool, was launched to address the acute shortage of career advisors—in some states, only 1 in 10 students has access to structured career guidance.

- Student innovators presented practical AI tools: a hyperlocal urban air pollution sentinel using satellite data, down to 100m resolution and deployable without expensive sensors; and 'Agri Shield AI,' providing local, explainable, climate-adaptive crop planning guidance for farmers.
- Collaborations extend beyond technology, with IBM working alongside national bodies (Ministry of Education, Ministry of Skill Development, NITI Aayog), and academic boards (AICTE, CBSE) to integrate AI into curricula and faculty development.
- Uttar Pradesh, as India’s most populous and youngest state, showcased scalable, inclusive AI as a case study in successful youth-driven technology adoption, aiming to bridge rural-urban divides.
- Emphasis was placed on AI as an augmentation of human talent—not a replacement—highlighting 'human-in-the-loop' and trust-centric capacity-building.
- The session set the tone for moving from rhetoric to action, with youth-led projects exemplifying AI's transformative impact at the grassroots, making AI both a tool for nation-building and societal inclusion.
AI for Health Equity: Inclusive Talent and Research–Industry Collaboration
The session at the India AI Impact Summit 2026, hosted by NIMS University alongside distinguished contributors from Ashoka University, UNICEF, and others, focused on advancing health equity through artificial intelligence. Keynote remarks underscored AI's potential to be a transformative equalizer in healthcare, particularly for marginalized populations, while emphasizing that technology alone cannot resolve systemic inequities. Presentations and panel interventions stressed the need for 'equity by design', inclusive policy and system integration, and cross-sector partnerships. Panelists shared on-the-ground insights about the critical role of accessibility, affordability, inclusive design, and robust implementation frameworks—citing national drive examples like the CoWIN platform—and highlighted the necessity of complete care continuums, data validation, and sustained motivation among frontline workers. Governance clarity, validation of AI outputs, and ensuring health system capacity were repeatedly stressed as cornerstones for deploying AI-driven health solutions. The session's dialogue highlighted the fundamental need for co-creation with affected communities, policy alignment, and practical, context-aware deployment in bridging digital divides and making the benefits of AI universally accessible in Indian healthcare.
- NIMS University positioned as a collaborative hub integrating AI, healthcare, law, and social sciences to foster real-world innovation and impact.
- Theme of 'equity by design' repeatedly highlighted as foundational—AI must be built and deployed with fairness, inclusivity, and access for marginalized and digitally divided groups.
- Accessibility (for remote/hard-to-reach areas), affordability (for both users and system implementers), accountability, and inclusive design were named as pillars for effective AI integration.
- National case study: CoWIN vaccination drive aimed to serve 1.4 billion people, with emphasis on design for disabilities and low digital literacy; critical reflection on inclusiveness for all beneficiaries.
- Expert examples showed pitfalls in AI deployment, such as successful disease screening but failure in healthcare delivery due to lack of local specialists and systemic follow-through.
- Data validation, algorithmic transparency, and integration with societal and political contexts (not just technical) are crucial to avoid perpetuating existing biases.
- Proper motivation, training, and capacity of frontline health workers are essential for successful AI implementation, as observed in both digital and manual interventions.
- Best practice models should include co-design with end-users, clear governance objectives, and ensure that digital solutions are context-appropriate and augment—not replace—human-centered care.
- Ongoing call for interdisciplinary and cross-sector collaboration: active engagement between policymakers, academia, startups, industry, and civil society.
Reimagining Public Broadcasting in the AI Era
This session focused on the challenges and opportunities surrounding access to broadcast data for AI development within South Africa and the wider African context. The speakers emphasized the crucial role of data – particularly high-quality, diverse datasets – in training effective AI models, especially those targeting African languages and contexts. South Africa’s Information Regulator is seeking amendments to outdated information access laws to better support modern AI needs, striving for a balance between data protection and enabling innovation. Priya Chetty and others highlighted how existing and emerging African Union frameworks, conventions, and resolutions on data governance and human rights provide both guidance and legitimacy for initiatives to unlock data access in a responsible, ethical, and culturally sensitive manner. Experiences from projects like Masakani demonstrated both the practical hurdles of negotiating data access with broadcasters and the critical importance of clear licensing and compensation frameworks. The session underscored a continental movement to leverage public and broadcast data for AI advancement while reinforcing the need for updated legal and ethical standards, inter-institutional collaboration, and the safeguarding of African cultural heritage.
- South Africa’s Information Regulator plans to propose amendments to the Promotion of Access to Information Act to make it fit for the AI era, taking into account the need for algorithmic transparency and modern conceptions of proactive information disclosure.
- Balancing data protection and innovation: Authorities aim to ensure personal information is safeguarded while enabling freer flow of public and broadcast data for AI training.
- African Union’s continental AI strategy prioritizes availability of high-quality, diverse datasets, cooperation between public/private/research sectors, and ethical frameworks that respect African values and human rights.
- The African Convention on Cyber Security and Personal Data Protection (2014) and AU Data Policy Framework provide blueprints for responsible data governance addressing both innovation and data security.
- AU Resolution 620 frames access to data as a human right and reinforces the importance of regional collaboration and harmonized standards for data access and AI development.
- Practical project insights (Masakani) revealed challenges in dataset licensing and broadcaster engagement, highlighting the importance of clear legal mechanisms and compensation for data use.
- Stakeholders are collaborating across regulators, broadcasters, universities, and innovation funds to develop a recognized governance framework supporting local AI/LLM modeling.
- Session reinforced the need for updated or new regulations, ethical safeguards, and revenue mechanisms to ensure sustainable, culturally relevant, and locally beneficial AI systems in Africa.
Data Sharing for AI: Building Trust, Purpose, and Public Value
The session at the India AI Impact Summit 2026 focused on the inherent frictions and debates surrounding data governance in the AI era, centering on data value extraction, sovereignty, community agency, and the rethinking of regulatory frameworks. Panelists highlighted the tensions between unlocking data for innovation and safeguarding rights, especially in contexts marked by existing social, economic, and cultural inequalities. They argued that friction within this ecosystem is not necessarily a negative, as it forces nuanced thinking about trade-offs and incentivizes stakeholders to develop models that align value creation for all parties—including data contributors, communities, and industry. Provocative perspectives—such as those advocating a 'New Deal for Data' or decentralized data marketplaces—challenge traditional data protection and copyright norms, urging policy innovation to balance equitable access, protection, and sustainable ecosystem participation. Examples from African language projects and rural health data digitization illustrate both the challenges and the potential for community-driven, sovereignty-respecting data infrastructure. Ultimately, panelists call for deliberate, context-aware frameworks that return value to data originators while enabling AI-led development and innovation.
- Panelists identified persistent tensions between data sharing for innovation and the protection of rights and sovereignty, particularly in diverse socio-cultural contexts.
- Incentive structures and viable value exchange models are critical for engaging all stakeholders in sustainable data ecosystems.
- Community-driven approaches to data management emphasize the importance of agency, addressing inequalities, and safeguarding identity and local knowledge.
- Existing regulatory frameworks such as data protection (data minimization, consent, retention limits) and copyright often conflict with AI's need for vast and persistent data access.
- The 'New Deal for Data' paper proposes rethinking data governance to reconcile innovation with protection, advocating that embracing friction can foster better outcomes.
- Decentralized data marketplaces are presented as a solution, where data sets can have variable pricing/license terms based on buyer type (e.g., academia vs. industry), preventing resource accumulation by big tech and ensuring value returns to data contributors.
- Community initiatives like Masaka exemplify bottom-up assertion of data sovereignty when state actors are absent or disengaged.
- Panelists call for policy and structural innovation to empower data contributors, enable equitable value-sharing, and build sovereign infrastructures tailored to developmental and cultural needs.
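The buyer-dependent pricing idea from the decentralized-marketplace discussion can be sketched as follows. Every tier name, price, and field in this snippet is hypothetical, invented purely to illustrate how one dataset listing could carry different license terms per buyer type while routing a share of revenue back to data contributors.

```python
# Hypothetical sketch of variable pricing in a decentralized data
# marketplace: the same dataset is licensed on different terms depending
# on who the buyer is. All tiers and numbers are invented.

from dataclasses import dataclass

@dataclass
class LicenseTerms:
    price_usd: float
    commercial_use: bool
    attribution_required: bool

# Buyer-type tiers for a single dataset listing.
TERMS = {
    "academia": LicenseTerms(price_usd=0.0, commercial_use=False,
                             attribution_required=True),
    "startup": LicenseTerms(price_usd=500.0, commercial_use=True,
                            attribution_required=True),
    "big_tech": LicenseTerms(price_usd=25_000.0, commercial_use=True,
                             attribution_required=True),
}

def quote(buyer_type, contributor_share=0.7):
    """Return (price, amount routed back to data contributors)."""
    terms = TERMS[buyer_type]
    return terms.price_usd, terms.price_usd * contributor_share

price, back_to_contributors = quote("big_tech")
```

The design choice the panel gestured at is visible even in this toy: pricing by buyer type discourages resource accumulation by the largest players while keeping academic access open, and the `contributor_share` parameter makes the value returned to data originators an explicit, auditable quantity.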
AI for ESG: Responsible Innovation for People, Planet, and Progress
The session, moderated by Dr. Maharaj Bharatwaj at the India AI Impact Summit 2026, brought together a distinguished panel of experts from India and Israel to discuss the intersection of Artificial Intelligence (AI) and Environmental, Social, and Governance (ESG) concerns. The discussion highlighted how AI is increasingly being leveraged to address ESG challenges, especially in sectors such as supply chain emissions tracking, carbon markets, and ESG reporting automation. The panel also delved into the complexities of developing policy frameworks that balance innovation, risk mitigation, fairness, and support for local contexts. Perspectives from India, Israel, and Switzerland highlighted differing regulatory philosophies across regions and underscored the importance of flexible, context-sensitive policy approaches. The conversation further emphasized the need to consider the environmental footprint of AI—particularly the energy demands of data centers—and to ensure that AI itself is developed and implemented in alignment with sustainability goals. Ultimately, the session provided a rich, multifaceted exploration of how AI and ESG can mutually reinforce each other to drive responsible innovation, regulatory clarity, and sustainable progress.
- Panel comprised leading experts from India and Israel, including representatives from government, policy, legal, academic, and industry sectors.
- AI is being used in ESG for real-time supply chain emission tracking, greenwashing detection, biodiversity monitoring, and automated ESG reporting.
- ESG data inconsistency remains a primary challenge; AI helps address this through data pattern analysis and automation.
- AI aids in risk forecasting for firms, including climate transition risk modeling and ESG-linked credit risk prediction.
- Policy frameworks must balance risk mitigation, trust-building, fairness in distribution of technological gains, and support for local innovation.
- Contrasting regulatory approaches were highlighted: the EU's rights- and regulation-focused model, the US's market-driven approach, China's state-centric framework, and India's inclusion-oriented path.
- Call for flexible, adaptive policy frameworks leveraging existing laws and regulatory testbeds rather than rigid top-down regulation.
- Growing concern around the carbon footprint and sustainability impact of AI itself, especially from energy-intensive data centers.
- Consensus on the need to align both AI's development and application with ESG principles to ensure holistic sustainability.
- Agricultural and micro-level perspectives were also addressed to ground the macro policy focus in practical, sectoral realities.
Building Trustworthy Digital Infrastructure for the AI Era
The session at the India AI Impact Summit 2026 addressed the critical need for a human-centric, values-aligned framework for AI development and governance. Panelists emphasized the lessons learned from large-scale public digital infrastructure (DPI) projects—such as India's DEPA (Data Empowerment and Protection Architecture)—highlighting success factors like leadership vision, institutional credibility, and bottom-up implementation. A recurring theme concerned the urgent necessity for technical and legal mechanisms ensuring personal data protection, user agency, and interoperability of AI systems. Speakers warned against repeating the pitfalls of previous technology waves, particularly the dominance of centralized, surveillance-oriented platforms, and instead advocated for sovereign AI agents that prioritize user control. There was strong consensus around building globally resilient, agile governance systems that can rapidly adapt to technological change while remaining grounded in openness, accountability, and inclusivity, with international efforts (including those by the UN, G7, and G20) seen as significant but needing greater interoperability and speed. Panelists stressed the importance of continued institutional agility, bridging the gap between technologists and policymakers, and educating users to empower their choices in the era of proliferating AI agents.
- India's DPI model, including DEPA, is viewed as a global benchmark for trustworthy, scalable digital frameworks, now being extended to AI governance.
- Call for AI agents and platforms to be truly user-controlled, with emphasis on data portability, interoperability, and empowering sovereign agents versus platform-driven incentives.
- Growing concerns about repeating the errors of previous technology epochs, with current AI models tending toward centralization and surveillance—a trend panelists urged to counter through policy and technical design.
- Resilient AI governance requires two main pillars: secure, interoperable user data control mechanisms, and international agreements on safety standards and non-weaponization.
- Agility of institutions is seen as paramount: national and global bodies must dramatically increase their ability to respond to rapid AI advancements, learning from past regulatory lags such as the EU AI Act.
- Ongoing global initiatives—UN's international scientific panel on AI, G7, G20 dialogues—are making progress, but must synchronize and accelerate standard setting.
- There is a strong push for public interest-centered governance, moving away from purely profit-maximization approaches in frontier AI development.
- Bridging the gap between policymakers and technologists, and fostering multi-stakeholder participation, are necessary to build trust and credibility in AI oversight.
- Panelists highlighted the critical role of user education and agency, ensuring individuals know their rights and are equipped to navigate and control AI systems.
Women, Work, and the AI Future
This session from the India AI Impact Summit 2026 critically examined the roles women currently play in the AI value chain, emphasizing their concentration in the least visible segments such as data collection, annotation, and gig-based platform work. Speakers highlighted the dangers of perpetuating technical systems that are socially shallow by failing to incorporate women's unique, contextual, and community-grounded knowledge into the design, governance, and evaluation of AI systems. The session underscored the need to shift thinking from merely increasing the number of women in the AI workforce to empowering them as active knowledge holders and decision-makers. Through examples from film and real-world experiences, the panelists revealed how overlooked lived experiences of women lead to biases, misaligned applications, and new vulnerabilities in AI. The conversation called for moving gender considerations upstream in AI development, making local knowledge foundational rather than an afterthought, and fostering a system in which women transition from invisible laborers to recognized agents shaping AI’s future.
- Women are heavily represented in foundational AI tasks (data collection, annotation, platform gig work), yet remain largely invisible and marginalized within the broader AI ecosystem.
- The current distribution risks creating technically advanced yet socially disconnected and exclusionary AI systems.
- Women's lived, local, and community-grounded knowledge must be recognized as core infrastructure, not anecdotal input, for building inclusive AI.
- Panelists called out the insufficiency of 'add-on' gender audits and compliance checks after deployment; gender inclusiveness should be incorporated at the design and governance stages.
- Concrete examples demonstrated how AI models and applications fail women: e.g., health apps trained largely on male data, or AI that mislabels culturally specific images or fails at inclusivity.
- Overreliance on large datasets and scale can obscure minority and gendered perspectives, exacerbating epistemic injustice and reproducing systemic inequality.
- The session featured voices from leading figures in gender, technology, and AI governance, advocating a shift from extraction to ownership, participation to power.
- Calls to action included reimagining AI value chains, re-centering local expertise and stories, improving practices for sourcing and evaluating training data, and learning from both film and academic studies.
AI × Creativity: Skills for Innovation in the Intelligent Economy | Global Roundtable
The panel discussion, titled 'AI Meets Creativity: Skilling for Innovation in the Intelligent Economy,' gathered prominent leaders from the Indian creative, technology, and academic sectors to examine the rapidly evolving interplay between artificial intelligence and creative industries—especially film and media. Organized under the aegis of NASSCOM, the Ministry of Electronics and IT, and Adobe, with representation from academia, industry, and policy, the session illuminated how AI is revolutionizing creative workflows across pre-production, visual effects, and storytelling in filmmaking. The panel highlighted the increasing adoption of AI-powered tools, the necessity of foundational creative skills alongside technical upskilling, and the Indian government’s push to institutionalize creative arts and digital skills education from a foundational to professional level. Key announcements included the rollout of the AVGC (Animation, Visual Effects, Gaming, Comics) curriculum in 15,000 schools and 500 institutes this year, a focus on lifelong curriculum updates due to fast-evolving AI technologies, and the articulation of 'emerging intelligence'—blending artificial, emotional, and behavioral intelligence. Challenges around bridging gaps between industry needs and formal education, the importance of storytelling and cultural context, and the democratization of content ownership through AI were also discussed.
- Panelists included industry leaders such as Rana Daggubati (actor/producer), Dr. Ashish Kulkarni (IICT/AVGC pioneer), senior executives from MakeMyTrip and Cognizant, and senior NASSCOM officials, moderated by Adobe’s Mala Sharma.
- A showcased student film demonstrated 99% AI-generated content, exemplifying AI’s creative potential.
- AI is dramatically accelerating and transforming stages of content creation, especially pre-production and visual effects, reducing work from days to hours.
- Storytelling, cultural literacy, and emotional and behavioral intelligence were identified as enduringly human creative domains.
- The Indian National Education Policy now allows creative arts, performing arts, design, and sports to be pursued from sixth grade onwards, breaking previous academic hierarchies.
- The AVGC (Animation, Visual Effects, Gaming, Comics) curriculum is being rolled out in 15,000 schools and 500 institutes in 2026, with foundational, consolidation, and professional stages.
- Fast technological change means creative education curricula now demand updates every semester due to emerging AI tools.
- Traditional industry-academia disconnects are being addressed via new educational mandates and practices like 'professor of practice.'
- Ownership and creation of intellectual property in film and media are shifting from studios to independent creators enabled by AI.
Enabling Development Through Trusted AI & Data Collaboration | India AI Impact Summit 2026
The session, 'Unlocking Impact: Shared Data Platforms for AI Innovation in Development Cooperation,' explored the evolving role of data and artificial intelligence (AI) in development cooperation, the persistent challenges of data silos and capacity gaps, and emerging solutions for collaborative, AI-ready data infrastructure. Panelists from the United Nations Development Program, Civic Data Lab, and the German Federal Ministry for Economic Cooperation and Development discussed the transition from merely collecting and digitizing data to leveraging it for real-time, predictive insights, especially highlighted during the pandemic. They emphasized the shortcomings of current approaches, largely stemming from a lack of robust data governance and insufficient participatory mechanisms. Initiatives like Germany's Nefle shared data platform aim to address these barriers by fostering cross-government and cross-country data collaboration, focusing on both technological infrastructure and the cultivation of a data-sharing culture. The session included concrete examples around climate action and administrative social registries, and stressed that sustainable AI solutions hinge on open, interoperable data ecosystems that bridge governments, civil society, and local communities.
- Panel was titled 'Unlocking Impact: Shared Data Platforms for AI Innovation in Development Cooperation.'
- The agenda included a multi-angle panel, a demo of the new Nefle platform (developed by the German Federal Ministry for Economic Cooperation and Development), and a Q&A.
- Panelists included experts from UNDP’s Digital AI and Innovation Hub, Civic Data Lab (India), and the German Ministry for Economic Cooperation.
- Nafia Lam (UNDP): Highlighted the shift in data's role from basic service delivery to real-time, AI-driven pattern recognition; noted the pandemic and advances in generative AI as catalysts.
- National AI-readiness efforts often focus on headline strategies and talent pipelines while neglecting the foundational data governance essential for long-term sustainability.
- Nupur Gavde (Civic Data Lab): Emphasized the abundance but underutilization of data due to silos; called for participatory, community-based approaches to make data actionable, using climate action case studies.
- Capacity building and co-creation across governance and community actors cited as keys for effective shared data platforms.
- Dr. Ilia Nickel (German Ministry): Discussed the challenging process of building the Nefle platform for inter-agency and international data collaboration, with EU-funded support; technical capabilities outpace bureaucratic processes, slowing innovation.
- Critical barriers include lack of interoperable infrastructure, cultural resistance in government, and procedural inertia.
- The Nefle platform is positioned as open, cloud-based, and suitable for international collaborations such as with India.
- Approach to overcoming barriers: Begin with open data, enable collaborative prototyping, and foster cultural buy-in via demonstrable, relevant use cases.
Public Data and AI Training: Safeguards for Responsible Reuse
The session, led by Renato of the Open Data Charter and Fola Adelek of the Global Center on AI Governance, focused on the complexities of using publicly accessible data for AI training, particularly regarding legal and ethical boundaries in data protection and copyright. The discussion unveiled an ongoing collaborative research project, supported by Microsoft, examining regulatory safeguards across Brazil, Japan, and Australia for responsible AI use. Speakers highlighted how current laws are often ambiguous about using public data for AI, especially concerning data privacy, copyright, and bias, and stressed the need for new governance frameworks to enable legitimate AI development while protecting individual rights. Audience engagement aimed to gather sectoral perspectives and experiences, forming part of a mixed-method, multi-country study to inform policy recommendations.
- Open Data Charter, a decade-old civil society organization, is reshaping its mission to address intersections between data governance and AI.
- A collaborative research project, funded by Microsoft, is exploring the legal and ethical use of publicly accessible data for AI training.
- Research focuses on data protection, copyright, and emerging AI-specific regulations across Brazil, Japan, and Australia.
- The project aims to identify governance gaps and regulatory uncertainties that affect responsible AI development.
- Initial desk research is complete; surveys and stakeholder interviews are upcoming to capture perceptions from governments and organizations.
- Current copyright and data protection laws offer unclear guidance on text and data mining for AI, especially regarding personal and government data.
- Brazil lacks explicit text and data mining copyright exceptions but has strong data protection and open data policies.
- Japan permits broad information analysis under copyright with some restrictions; data protection laws still apply.
- Bias and data quality issues persist, even with anonymized data, emphasizing the importance of responsible data use.
- Audience engagement through digital tools (e.g., Mentimeter) was used to collect perspectives and inform the research.
Preparing National Research Ecosystems for AI | Strategies, Scale, and Progress
The session presented the launch of the International Science Council’s (ISC) third report examining how national science systems worldwide are integrating AI. The report covers 26 country case studies—including new additions from Egypt, Fiji, Hungary, Kenya, Namibia, Romania, Rwanda, and Singapore—highlighting persistent challenges and emerging solutions. Key unresolved issues identified include gaps in policy guidance specifically for science systems, disparities in compute and data infrastructure, lack of systematic data stewardship, persistent talent shortages, evolving (but sometimes misaligned) science funding models, and overlooked environmental impacts. Kenya’s case revealed an early policy landscape with fragmented AI research efforts, inconsistent funding (mostly from multinationals), and acute challenges in compute access and talent retention. Singapore demonstrated an advanced, government-driven approach, underpinned by targeted AI-for-Science initiatives, strong regional leadership, and development of localized language models like Sea Lion, while still contending with talent shortages. The report serves as a vital resource for policymakers, institutions, and researchers seeking actionable models and shared learnings on integrating AI into science ecosystems.
- The International Science Council released version three of its comprehensive report on AI uptake in science, now featuring 26 country case studies, with eight new countries added (Egypt, Fiji, Hungary, Kenya, Namibia, Romania, Rwanda, Singapore).
- A 2023 literature survey identified 45 critical issues for AI adoption in science; most remain unresolved.
- Key cross-country gaps include lack of AI policy guidance specific to science systems, inequitable access to compute and data infrastructure, limited attention to data stewardship, ongoing skill shortages, concerns over shifting science funding away from foundational research, and little focus on environmental sustainability impacts.
- Kenya’s AI science ecosystem is in early development: there is no dedicated national AI policy, no sustained government funding for AI research, heavy reliance on foreign funding (Google, IBM, Gates Foundation), and compute access remains prohibitively expensive for most researchers.
- Brain drain is evident in Kenya, as top AI talent is pulled into industry or moves abroad; new education initiatives are emerging, including the country's first university-led GPU cluster and increased AI/DS degree pathways.
- Singapore serves as a regional AI science leader with mature strategies, strong government funding (including a $120M AI for Science initiative), and pioneering efforts like the Sea Lion language model tailored to local needs, yet faces persistent talent shortages.
- The ISC report is positioned as a resource for decision-makers, policymakers, and researchers, offering best practices and lessons learned to inform policy and organizational responses.
The AI Openness Forum: Building Trust Through Transparency
The session at the India AI Impact Summit 2026 focused on the evolving role of AI as a societal coordination layer, moving beyond isolated technological advancements to shaping interactions between complex systems like energy, industry, and public services. Speakers emphasized the need for both national agency and global interoperability, highlighting India's significant investments in developing sovereign AI capabilities attuned to local needs. The session underscored the risks inherent in misaligned AI systems and advocated for shared testing environments, interoperability standards, and collaborative validation models to ensure safe, effective cross-border AI integration. The importance of capacity building, inclusive participation, and mechanisms to protect local priorities while enabling global cooperation was strongly articulated, encapsulated by projects from Cintev and its EU-Africa partnerships that serve as models for collective intelligence and responsible data use. The interactive 'world cafe' segment further fostered collaborative policy design among diverse stakeholders, culminating in a promise to co-author a policy brief outlining actionable next steps for multinational AI governance and impact.
- AI is emerging as a coordination technology for critical societal systems (energy, logistics, climate, industry) rather than just as advanced prediction models.
- India is making major investments in domestic AI compute resources and models rooted in local languages, data, and societal priorities, establishing technological sovereignty.
- Speakers highlighted the risk of systemic instability when misaligned AI systems (trained on divergent data and rules) interface across countries or sectors.
- Centralized global AI control was rejected; instead, shared digital sandboxes, interoperability standards, and cross-border safety validation were proposed as solutions.
- Cintev presented real-world AI coordination projects at multiple levels: Read (adaptive operations), Enfield (trustworthy, energy-efficient AI), Data Pact (sovereign data governance), and Edito (global shared digital infrastructure).
- The EU-funded OpenMod for Africa project was cited as an example of cross-country AI-enabled decision-making infrastructure involving eight African nations.
- The 'world cafe' participatory workshop sought to draft a policy brief through collective intelligence, focusing on principles of agency, trust, and sovereign-friendly AI ecosystem design.
- Panelists acknowledged the challenges of equitable AI access (especially in India's smaller cities) and the necessity for inclusive hardware and compute solutions.
From Idea to Impact: 3 Panels, 9 Solutions, 1 Future | Live from Bharat Mandapam
The session at the India AI Impact Summit 2026 highlighted major initiatives and collaborations aimed at AI-driven inclusion and accessibility, particularly for persons with disabilities in India. Key projects include the A4I AI Innovation and Inclusion Initiative led by Microsoft and IIIT Bangalore, and various AI-powered solutions showcased in six newly launched casebooks. These casebooks document 18 scalable, real-world AI solutions that have significantly enhanced accessibility and autonomy for persons with disabilities. High-level government engagement was evident, with the Department of Empowerment of Persons with Disabilities describing India's evolving legislative and policy framework—most notably the RPWD Act 2016, which aligns with the UNCRPD to guarantee accessibility and inclusion as core rights. The summit also announced substantial budget support for scaling advanced assistive technology production and highlighted specific AI use cases ranging from early diagnosis tools for neurodevelopmental disorders to adaptive learning platforms. Additionally, the 'UI Global Youth Challenge' was spotlighted, underscoring India's commitment to fostering next-generation AI innovators, evidenced by the 2400+ youth submissions from multiple countries. The dissemination of digital casebooks, the recognition of partners, and multiple government and industry participants showcase India's comprehensive and multi-stakeholder approach to leveraging AI for inclusive impact.
- Launch of six AI impact casebooks, featuring 18 deployed and scalable AI solutions focused on real-world impact in accessibility and inclusion.
- Announcement of the A4I AI Innovation and Inclusion Initiative, a partnership between Microsoft and IIIT Bangalore to develop and scale inclusive AI technologies as digital public goods.
- Showcase of AI-powered solutions for blind children in collaboration with NGO Vision Empower, targeting advanced STEM learning and accessibility.
- Government reaffirmed India's commitment to the UNCRPD and highlighted the RPWD Act 2016, expanding recognized disabilities from 7 to 21 and codifying accessibility and inclusion as rights.
- Recent budget announcement for the Sahara Yosna scheme, aimed at a 10x scale-up in production of advanced, AI-enabled assistive devices and R&D support.
- Examples of AI applications include early diagnosis for neurodevelopmental disorders, computer vision for navigation, speech and language technologies, adaptive learning for special needs, and AI-enabled custom orthoses and prosthetics.
- AI diagnostic tools have reduced the average age of diagnosis for various disabilities from 4-5 years to as early as 18-24 months.
- Launch and recognition of the UI Global Youth Challenge: over 2400 AI solution submissions from youth (ages 13-21) across India and internationally, with top 20 teams participating in summit finals.
- Emphasis on ethical considerations, cultural context, informed consent, and data privacy in the design and deployment of AI accessibility tools.
- Distribution of digital casebooks online; physical copies available post-summit for featured authors.
AI for ALL Challenge & Panel on Leveraging AI for Development in the Global South
This session at the India AI Impact Summit 2026 featured visionary leaders from the Wadhwani Institute for Artificial Intelligence and the former CEO of the National Health Authority, Mr. Indujan. The discussion focused on the rapid advancement of AI from research to real-world deployment and its transformative potential for the global south, especially India. Emphasis was placed on AI's ability to deliver measurable public impact in sectors like healthcare, education, and agriculture—impacting over 125 million people via Wadhwani AI initiatives. The speakers highlighted the need for AI integration into government systems rather than isolated pilots, and underscored the importance of equity, governance, and local context in AI model deployment. Risks such as 'pilotism,' scaling without safeguards, and government overreliance on external capacity were identified. The conversation signaled a shift from passive adoption to co-creation and local ownership of AI in the global south, with a recognition that true impact requires regulatory clarity, data governance, leadership, and public sector capacity—not just technical innovation.
- Wadhwani AI has deployed over 25 AI platforms in partnership with the Indian government, impacting more than 125 million people since 2018.
- AI-based early childhood education tools have been made mandatory for 8 million children in Gujarat and Rajasthan, with a national rollout targeting 75 million students by 2027.
- AI initiatives now reach over 100 million lives through frontline worker tools and 20 million students with multi-lingual solutions.
- Examples include AI-driven tools for improving literacy, eradicating tuberculosis, and reducing infant mortality rates.
- Key lesson: institutional adoption and integration into government platforms are more important than isolated technical pilots.
- Equity must be a design principle; exclusion from data leads to exclusion from services and justice.
- Governance is critical: AI should assist but not replace government responsibility and human oversight.
- The global south must avoid dependency on foreign-designed AI models and shift towards co-created, locally governed solutions.
- Identified risks: focusing on pilots over public platforms ('pilotism'), scaling without safeguards which undermines trust, and lack of in-house government AI capability.
- Scaling AI for public good requires leadership alignment, data governance, regulatory clarity, and willingness to collaborate and share learnings across countries in the global south.
How India Is Turning Health AI Research into Real Impact
The session addressed the systemic challenges in leveraging AI for healthcare in India, particularly focusing on gender inequities and the limitations of short-term, pilot-driven approaches. Panelists emphasized that while well-intentioned philanthropic and CSR funding is increasing, much of it is short-lived and not integrated into the broader healthcare system, especially in resource-constrained rural and urban communities. Issues such as biased data, patriarchal norms limiting women's access, and lack of sustained funding and community engagement were highlighted. The conversation also underscored the need for governance, continuous certification, and systemic integration of AI, as well as the importance of change management and long-term ownership to ensure scalability and impact. Optimism was expressed regarding nascent policy developments and early signs of progress, but speakers agreed that comprehensive systemic reform, user-centric design, and ongoing capacity building are critical to preventing AI solutions from repeating past failures in digital health.
- Short-term, pilot-driven AI initiatives ('pilotitis') in healthcare have failed to deliver sustained, systemic impact, especially for women in marginalized communities.
- Patriarchal social norms and lack of private digital access impede women's ability to benefit from digital health solutions.
- Many AI for health projects are funded by CSR and philanthropy, but these are often cyclical and lack integration with government systems and attention to unit economics.
- AI tools risk amplifying gender and social biases due to biased data and a lack of inclusive design.
- Long-term effectiveness requires co-development with communities, continual funding, and institutional ownership rather than top-down approaches.
- There is a new emphasis on systematized AI governance, with policies being launched (including in India) to certify, validate, and monitor AI health tools throughout their lifecycle.
- Change management and the preparation of healthcare workers are essential for successful adoption and scaling of AI solutions.
- Client centricity, problem-solution alignment, and system integration are key criteria for funders evaluating new AI healthcare initiatives.
- Despite challenges, early examples (such as AI-driven HIV self-diagnosis in Kenya and cancer screening in Odisha) indicate accessible, impactful AI interventions are emerging.
- Ongoing optimism is tempered by calls for patient, process-oriented progress and a focus on building foundational infrastructure and sustainable financing.
Global Views on AI and Deep Tech Investments in India
The session concluded with a forward-looking perspective on India's AI landscape, emphasizing the nation's current strength in software talent—6 million programmers, primarily working in GCCs (Global Capability Centers) for global AI development. The speaker predicted a substantial evolution in the workforce, estimating that AI-driven productivity could triple effective output, yielding the equivalent of 18 million AI agents as India transitions into a global agentic marketplace. The session underscored AI's pivotal role across economic productivity, military strength, and information control, and called for strategic action to position India as a leader in these critical areas. Closing remarks stressed the importance of seizing these opportunities and attracting global capital to further India's AI story.
- India currently has 6 million software engineers, many contributing to AI initiatives for global companies via GCCs.
- AI-enabled productivity is projected to increase by 30%; the speaker anticipates a 3x workforce impact, projecting up to 18 million AI agents in India.
- India is positioned to become a global agentic marketplace, driving economic transformation.
- AI is positioned as central to economic productivity, military power, and information control in India's strategic vision.
- Session highlighted the need for policies and opportunities to strategically leverage AI and attract global capital.
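The workforce projection in this session is straightforward arithmetic; a minimal sketch, treating the speaker's 3x figure as a flat multiplier on the 6 million engineers (an illustrative assumption, not a formal model):

```python
# Back-of-the-envelope version of the session's workforce projection.
# Figures come from the session itself (6 million software engineers,
# a 3x AI productivity multiplier), not from independent data.
ENGINEERS = 6_000_000
AI_MULTIPLIER = 3

def projected_agentic_workforce(engineers: int, multiplier: int) -> int:
    """Projected 'AI agent' equivalents under a flat productivity multiplier."""
    return engineers * multiplier

agents = projected_agentic_workforce(ENGINEERS, AI_MULTIPLIER)
print(agents)  # prints 18000000, matching the speaker's 18 million figure
```

Note the tension with the 30% productivity figure in the same session: a 30% gain and a 3x multiplier are different claims, and the sketch above encodes only the latter.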
Consumers at the Core: Building AI People Trust | Panel Discussion
The session at the India AI Impact Summit 2026 focused on the pivotal theme of consumer empowerment in the age of AI, a topic highlighted as underrepresented in global discourse. Panelists, including former Secretary of Consumer Affairs Mr. Rohit Mahesh Singh, discussed both the significant benefits and mounting risks associated with AI adoption for Indian consumers. Examples ranged from AI-driven services in banking and telecom for fraud detection and regional language accessibility, to alarming cases of deepfakes, hyper-personalization, and invasive data collection. Mr. Singh emphasized the shifting balance of power towards sellers through manipulative AI algorithms (such as dark patterns and algorithmic discrimination) and privacy challenges. He proposed a framework for consumer-centric AI governance with five pillars: transparency, accountability, fairness/non-discrimination, privacy/data protection, and accessibility/inclusion. The session underscored the urgency for India to play a leading, principled, and balanced global role in developing responsible AI that protects consumers, advocating for heightened consumer awareness and rights as foundational to trust in emerging digital economies.
- First panel at the Summit dedicated entirely to consumer empowerment in AI context.
- AI's benefits include 24/7 digital assistance, personalized services, real-time risk monitoring, and accessibility for regional language users.
- Indian banks and telcos actively implementing AI for fraud detection and regional language chatbots.
- Emerging risks: opaque AI decision-making, deepfakes influencing financial decisions, hyper-personalized targeting, and algorithmic 'hallucinations.'
- Case highlighted: deepfake videos using the finance minister's image to manipulate consumer financial actions.
- Manipulative 'dark patterns' in online commerce and algorithmic discrimination (for credit scores, insurance, pricing) proliferate on Indian platforms.
- Direct example: price discrimination and manipulation by e-commerce and service apps based on user device, location, and urgency signals (like low battery).
- Five policy pillars proposed: transparency, accountability, fairness/non-discrimination, privacy/data protection, and accessibility/inclusion.
- Call for increased consumer awareness and knowledge of digital rights as equally important as empowerment.
- India positioned to provide a 'non-aligned', people-centric model for responsible AI governance globally.
- Summit urged all stakeholders (government, industry, academia, civil society) to collaborate on actionable regulatory and awareness-building steps.
Reimagining Women’s Inclusion in the Future of Work
This session at the India AI Impact Summit 2026 spotlighted the intersection of women's workforce participation and the rapidly transforming digital, AI-driven economy in India, particularly in rural areas. Panelists highlighted both the vast opportunities and persistent structural challenges, such as siloed skilling initiatives, lack of last-mile job connectivity, and gender-based disparities in career progression. Data was shared indicating that, while AI and digital infrastructure have enabled millions of marginalized women to join new work paradigms—ranging from frontline public service to roles in AI data ecosystems—systemic barriers around mobility, job aggregation, and job quality persist. Notably, successful models like NextWealth, which focuses on job generation for women in small towns using human-in-the-loop AI processes, exemplify the scalability and sustainability of such efforts, having already employed thousands of women and projecting exponential job growth in AI fields. The panel called for bold, collaborative action across government, industry, and civil society to move beyond fragmented initiatives and build cohesive ecosystems that recognize, connect, and elevate women in the emerging digital workforce.
- Women's digital economy jobs in India span from frontline public service to AI data and content ecosystems, but are often unrecognized, unprotected, and underpaid.
- There are 22+ government schemes focused on skilling women, but these operate in silos and often do not connect women to actual jobs.
- Most skilling is supply-side (where women are), rather than demand-driven (where jobs are), creating a mismatch and mobility challenges.
- Women face additional barriers to labor market participation due to lower mobility, time constraints, and caregiving responsibilities.
- Recommendation: Establish and invest in demand-side aggregator platforms to connect women where they are to the available job market, similar to models used in microfinance.
- NextWealth's 'human-in-the-loop' model employs a workforce that is 60% women across its 11 centers, with 5,000 current jobs and 20,000 created in total, focusing on AI data annotation and related fields.
- Projections discussed include 92 million jobs to be automated, but with 170 million new jobs created and $300–400 billion in new AI service sectors, per Infosys investor analysis.
- Panel emphasizes the risk of bias and hallucination in AI if the workforce and data creation remains unrepresentative, advocating for greater inclusion of women.
- Call to action: Multi-sector collaboration to accelerate women's participation, engagement, and progression in the digital/AI workforce at scale.
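The Infosys figures cited above imply a net employment effect the panel did not state explicitly; a quick check, using only the numbers quoted in the session:

```python
# Net employment effect implied by the projections cited in the panel
# (92 million jobs automated vs. 170 million created, per the Infosys
# investor analysis quoted in the session; these are the panel's figures,
# not independently verified statistics).
jobs_automated = 92_000_000
jobs_created = 170_000_000

net_new_jobs = jobs_created - jobs_automated
print(net_new_jobs)  # prints 78000000, i.e. a net gain of 78 million jobs
```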
Decisions at Speed: How Data Intelligence Powers Sovereign & Enterprise AI
The session at the India AI Impact Summit 2026 focused on the rise and foundational importance of 'sovereign AI' for India, highlighting the efforts of major industry leaders—Nvidia, DDN, and Yotta—in building the nation's AI infrastructure. Speakers emphasized sovereign AI as a set of principles driving the localization of technology, data, models, and services to strengthen national capabilities, autonomy, and security, while fostering economic competitiveness and preserving local languages and cultures. The session detailed the critical pillars required for building sovereign AI: government strategy and funding, local consumers, technology providers, hosting/service providers, and data custodians. Speakers outlined the phased approach to developing sovereign AI, beginning with national data repositories, ensuring data is AI-ready, developing contextually relevant AI models, and building scalable citizen-centric services. India’s focus on initiatives like Make in India and investments in local language model development was cited as paving the way for both domestic growth and the eventual export of AI-powered products and services. The session underscored the need for robust infrastructure and data stewardship at scale to address unique national use cases, such as UPI, Aadhaar, and potential population-scale genomics.
- Sovereign AI defined as localizing AI development, data, and infrastructure to strengthen national autonomy and competitiveness while collaborating globally.
- India's sovereign AI efforts are spearheaded by Nvidia, DDN, and Yotta, emphasizing both computing power (GPUs) and robust domestic data infrastructure.
- Five pillars of sovereign AI: 1) government leadership and funding, 2) local consumers/industries, 3) technology providers (chip, data platform, data center vendors), 4) hosting/service providers, and 5) data custodians.
- Data custodians are essential for safeguarding and managing critical national data (e.g., UPI, Aadhaar, ISRO satellite data).
- Key drivers for sovereign AI include national security, data sovereignty, economic competitiveness, technology ownership, language/culture preservation, and socioeconomic growth.
- India is investing in 12+ language model builders to infuse local language, culture, and domain knowledge into AI models.
- The development stack for sovereign AI: build national data repositories, make data AI-ready (annotation/labeling), develop indigenous models, and deploy scalable citizen services.
- India's Make in India initiative is closely tied to technology self-reliance in AI.
- Population-scale use cases (e.g., genomics, UPI, Aadhaar) require massive, purpose-built infrastructure and coordinated data strategies.
- Long-term goal: Exporting AI-powered products and services developed with sovereign capabilities.
AI for the Last Mile | Driving Social Empowerment through Accessibility
This session at the India AI Impact Summit 2026 focused on how AI is democratizing access to technology and education in India, particularly for marginalized and rural populations. Panelists discussed the transformative power of AI to bridge the longstanding digital divide—bringing sophisticated tools, content, and assistance in regional languages to areas previously lacking even basic infrastructure. The conversation highlighted real-world examples of AI being used by non-experts, such as delivery drivers optimizing their routes and senior citizens using conversational AI for self-education. The panel delved into the evolving purpose of education in the AI age, emphasizing a shift from rote, tool-based training to fostering idea generation, ethical understanding, and social skills. They acknowledged that while on-demand knowledge is now universally accessible, wisdom, empathy, and adaptive human skills remain critical and irreplaceable. The session also explored the continued relevance of branding and networks provided by elite educational institutions but anticipated a shift in how their value is recognized. Ultimately, the panel agreed that the future of education must blend traditional learning objectives with the new possibilities unlocked by AI, ensuring that both urban and rural learners are empowered for the coming era.
- AI-driven innovations are now reaching rural and previously underserved areas, leveraging regional languages and digital kiosks (CSCs).
- Tools like ChatGPT have enabled all individuals—including drivers, senior citizens, and those with limited English skills—to become empowered users and learners.
- The future curriculum must shift from rote learning to prioritizing ethics, social skills, first-principles thinking, and creative idea generation.
- Panelists emphasized the need for education to infuse wisdom and foster social learning, not just deliver knowledge.
- Access to elite branding and networks from institutions like INSEAD and IIM retains some value, but career catapulting is now possible independently of traditional pathways.
- Hiring expectations are evolving: young professionals and students are now expected to use AI in their workflow, and prompt engineering is increasingly fundamental.
- Marginalized and rural students should continue investing in their education, but institutional curricula must rapidly adapt to maximize AI benefits inclusively.
- The digital divide is narrowing, enabling rural innovators to access and contribute their unique environmental and contextual wisdom.
- Human elements—especially empathy, wisdom, and social intelligence—remain irreplaceable in an AI-dominated educational landscape.
AI-Powered Early Warning Systems: Protecting Vulnerable Communities| India AI Impact Summit 2026
In his session at the India AI Impact Summit 2026, Malik, a faculty member at TUS, presented collaborative research with Rohini Pande on leveraging AI-powered flood early warning systems to reduce flood risk for vulnerable households in Bihar. The talk highlighted both the impressive advances in AI-based flood forecasting by Indian authorities (CWC) and Google—delivering highly accurate, timely, and granular alerts—and the persistent challenge that these forecasts rarely reach rural households most at risk. Less than 20% of vulnerable households in Bihar reported receiving official alerts from either CWC or Google, exposing a critical 'last mile' gap between technology and impact. Malik discussed two approaches to closing this gap: one using local community leaders (Panchayat Mukhia), which proved ineffective due to lack of engagement, and a second, successful model using dedicated and trained community agents who utilized both traditional methods (loudspeakers, flags) and modern channels (SMS, WhatsApp) to disseminate real-time alerts. Over four years, this agent-based model boosted alert penetration and holds promise as a scalable solution to bridge the final divide between advanced forecasting and actionable local warning.
- Flood risk in Bihar has risen sharply over the past 25 years, with the state accounting for 18% of India’s flood-prone area; over 390 million Indians are at risk.
- AI-based flood early warning systems by India's Central Water Commission (CWC) and Google offer highly accurate (95%+) and timely (2-5 days in advance) forecasts.
- Despite technological advances, less than 20% of households in flood-prone communities reported ever receiving an official alert from either CWC or Google.
- Traditional forms of dissemination (TV, radio, SMS, press, and even local leaders) fail to systematically reach vulnerable rural populations.
- A four-year study involving 5,500 households in 160 flood-prone panchayats found a persistent gap: most at-risk households did not receive actionable warnings.
- A local leader-based delivery model (relying on Panchayat Mukhia) had no measurable impact: leaders often ignored or failed to relay alerts.
- A community agent model—deploying, training, and compensating local agents to relay alerts through loudspeakers, WhatsApp, SMS, and flags—succeeded in boosting alert penetration and prompt action.
- The community agent model blended trust, local knowledge, multiple communication channels, and accountability, proving to be an actionable solution for last-mile delivery of AI-powered alerts.
AI from India to the World
This closed-door session at the India AI Impact Summit 2026, featuring renowned founder-investor Vinod and moderated by Mohit from Peak XV Partners, delved into the transformative impact of AI on India's economic, social, and policy landscape. The conversation highlighted the exponential growth of interest in the country's AI ecosystem, signified by record-breaking summit registrations and active participation from both founders and investors. Vinod commended India's forward-thinking approach in developing sovereign AI models, such as Sarvam, and noted the strategic importance of these initiatives for national security and technological self-sufficiency. He identified practical, immediate applications of AI in public services: the deployment of AI-driven healthcare (AI doctors), education (AI tutors), and agricultural advisory (AI agronomists), which could provide essential services to hundreds of millions at low cost, deeply leveraging India's digital infrastructure like Aadhaar and UPI. Looking internationally, Vinod forecasted that legacy IT services and BPO industries would face obsolescence by 2030, replaced by higher-value, AI-powered services and exports. On a societal level, he argued that AI's increasing capabilities would democratize access to high-level knowledge (medicine, education, law, and beyond), drive down costs, and foster global shifts in labor and value creation. With regulatory acceptance of AI in medicine already emerging in places like Utah, the session underscored the urgency and scale at which these changes are arriving, both in India and worldwide.
- Record interest in India's AI Impact Summit 2026 with reports of 300,000+ registration attempts, indicating massive ecosystem engagement.
- Indian government commended for progressive AI policies: strategic focus on sovereign AI models (e.g., Sarvam), critical for national defense and cybersecurity.
- Vinod's estimate: providing daily primary AI-based healthcare to 700 million Indians could cost less than $2 billion per year—drastically lower than current spending.
- Envisions AI-enabled public goods at scale—AI doctors, tutors, local-language agronomists—integrated with existing nationwide digital platforms (Aadhaar, UPI).
- Projects that by 2030, legacy IT services and BPO as standalone industries will disappear, replaced by AI-driven knowledge and service exports.
- Cites rapid advances in AI-generated custom solutions, such as patient-specific drug design, and forecasts widespread deployment in coming years.
- Predicts AI will equalize knowledge distribution: construction workers and oncologists could access the same expertise via AI, shifting the value to problem specification rather than rote knowledge.
- Notes regulatory acceptance of AI in medicine (e.g., Utah allowing AI prescriptions in 2026), signaling imminent disruption in healthcare delivery.
- Advocates for free or near-free access to AI-powered legal, educational, and entertainment services for all socio-economic segments.
- Warns of massive economic and professional disruption, urging Indian IT and BPO sectors to transition, and highlights India's unique opportunity given its engineering talent and digital ecosystem.
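Vinod's healthcare estimate implies a striking per-person cost; a back-of-the-envelope check using only the figures quoted in the session (under $2 billion per year, 700 million people):

```python
# Implied per-person cost of the AI primary-care estimate from the session:
# under $2 billion per year covering 700 million Indians. Both inputs are
# the speaker's figures, not verified budget data.
annual_budget_usd = 2_000_000_000
people_covered = 700_000_000

cost_per_person_per_year = annual_budget_usd / people_covered
print(round(cost_per_person_per_year, 2))  # prints 2.86, under $3 per person per year
```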
Flipping the Script: Re-Coding the AI Economy for the Global Majority
The session at the India AI Impact Summit 2026 showcased a wave of innovative Indian startups harnessing AI for domains such as legal dispute resolution, embedded software, GPU hardware, edge deployment, and defense. Several key investments were announced, reflecting robust venture capital interest in India's AI ecosystem. Startups like Runstack are automating and accelerating the traditionally slow legal process, while Turyam is developing high-efficiency, sovereign GPUs, and Craftify is revolutionizing embedded AI development, cutting costs and manpower by up to 70%. Defense-focused Armory demonstrated advanced counter-drone AI systems, already securing significant orders from the Ministry of Defence. Yubic Edge enables non-programmers to deploy AI in the field, supporting thousands of real-world use cases. The session was rounded out by a panel on AI for social good, spotlighting the economic potential of AI ($15.7 trillion globally by 2030), the global south's opportunity to leapfrog development through responsible, inclusive AI, and urgent calls to boost AI literacy in these regions, so their large youth populations can fully benefit from upcoming AI-driven growth.
- Runstack automates 90% of litigation workflows, aiming to cut dispute resolution time from 600 days to 60; clients include the High Court and the Karnataka government.
- Turyam claims 5–10x better performance per watt than Nvidia GPUs and advocates a sovereign AI hardware stack; it noted that open-source models under 70B parameters are nearing closed-source efficiency.
- Craftify's embedded agent platform saves 70% in development costs and shrinks typical development teams from 10 to 2 members across edge/IoT devices.
- Armory unveiled an AI-based counter-drone solution 'Surge' with 5 km detection/3 km jamming, already securing INR 100 crore in orders from India's Ministry of Defence.
- Yubic Edge's Sama/Cleon platforms enable prompt-based hardware/software deployment in Hindi/English, scaling to over 25,000 deployments and enabling 5,000+ real-world use cases.
- Significant VC investment: Ankur Capital Fund invested INR 32.7 crore across two startups, Antler Innovation Fund 1 invested INR 21.8 crore, Piper Erica invested INR 34 crore.
- Panel discussion highlighted that the global AI economy will generate $15.7 trillion by 2030, but only 11% of this is projected to go to Latin America, Africa, Oceania, and parts of Asia.
- Stressed the importance and opportunity for the Global South—home to a majority of the world’s youth—to lead in developing ethical, responsible, and sustainable AI.
- Urgent call to build AI literacy in the Global South so citizens can both leverage and critically evaluate AI systems.
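The split of the AI economy quoted by the panel above can be sanity-checked with simple arithmetic (a back-of-the-envelope sketch; the $15.7 trillion and 11% figures are taken as quoted in the session):

```python
# Back-of-the-envelope check of the panel's AI-economy figures.
global_ai_value_usd_tn = 15.7  # projected global AI contribution by 2030, as quoted
global_majority_share = 0.11   # share projected for Latin America, Africa, Oceania, parts of Asia

majority_value_tn = global_ai_value_usd_tn * global_majority_share
rest_value_tn = global_ai_value_usd_tn - majority_value_tn

print(f"Global-majority regions: ~${majority_value_tn:.2f} tn")  # ~$1.73 tn
print(f"Everywhere else:        ~${rest_value_tn:.2f} tn")       # ~$13.97 tn
```

In other words, under these projections roughly $14 trillion of AI-driven value accrues outside the regions that hold most of the world's youth, which is the disparity the panel highlighted.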
Building India’s AI Governance Architecture
The session at the India AI Impact Summit 2026 focused on the design and diffusion of Digital Public Infrastructure (DPI) for artificial intelligence (AI), with a spotlight on promoting affordable, accessible, multilingual, and socially beneficial AI architectures. The discussion featured global perspectives from Japan, the United Nations, and Brazil to understand best practices and challenges in AI deployment and governance. Japanese experience highlighted the shift from specialist education to user empowerment and agile, feedback-driven governance rather than rigid regulation. The United Nations emphasized balancing digital sovereignty with international cooperation, leveraging open, interoperable DPI to foster innovation and address local and global challenges. Brazil's approach underscored building a domestic AI ecosystem focused on sovereignty by investing in public supercomputing, leveraging rich public datasets, and providing MSMEs with access to critical infrastructure. Common threads included the need for robust public infrastructure, user-focused education, collaborative international frameworks, and ensuring inclusion—particularly for MSMEs, women, and underserved populations—to democratize AI adoption and avoid an inequitable digital society.
- The session was structured in two panels: learning from global experiences and adapting those lessons for India's AI DPI over the next 12–18 months.
- Japan's government strategy is shifting from rigid regulation to agile, feedback-based AI governance, and expanding AI literacy beyond specialists to general users through institutions like IPA and University of Tokyo’s GCI program.
- Japan noted its lack of a structured DPI and expressed intent to learn from India’s DPI initiatives such as DEPA.
- United Nations viewpoint stressed that DPI principles—interoperability, openness, modularity—enable a balance between national sovereignty and international digital cooperation.
- India’s DPI stack serves as a model for AI adoption: sovereign core infrastructure enabling public interest goals, with APIs and benchmarks for startups to innovate.
- Global approaches discussed include independent international scientific panels for neutral AI assessments and global capacity building, particularly for the Global South.
- Concerns over compute power and talent outflow were raised, emphasizing the importance of democratizing smaller, context-specific AI models (SLMs) instead of relying solely on massive models and hardware.
- Brazil is focusing on sovereignty and ecosystem development, investing in public supercomputing, open software stacks, and public datasets to empower domestic researchers, MSMEs, and startups.
- Brazil’s AI plan includes making public infrastructure accessible to smaller players to reduce dependence on multinational technology firms.
Governing Autonomy: Trust in Agentic AI Systems
This session at the India AI Impact Summit 2026 brought together leading voices from technology, investment, and standards to discuss the new frontier of 'agentic AI'—AI systems capable of autonomous agency and decision-making. The panel highlighted the emerging landscape of multi-agent AI environments, drawing distinctions from traditional AI and emphasizing the unique governance and trust challenges they pose. Key themes included the need for foundational trust mechanisms, governance models specifically designed for agentic autonomy, continuous runtime oversight, agent identity certification, and the integration of governance as a product feature rather than a mere compliance afterthought. With concrete models like the '5-layer governance stack' and strong calls for transparent, accountable, and human-in-the-loop approaches, the session underscored that building robust, trustworthy AI ecosystems requires cross-sector collaboration and a holistic mindset toward risk management and accountability. Governance is rapidly becoming not only a regulatory necessity but also a market differentiator as enterprises and investors demand ‘governance inside’ for all autonomous AI products.
- Agentic AI and robotics are moving toward a spectrum of autonomous agency, from basic chatbots to fully autonomous vehicles.
- Project Nanda is developing foundational infrastructure for an 'internet of AI agents,' focusing on embedding trust and governance primitives at the core architectural level.
- New governance paradigms are required: traditional post-hoc controls are insufficient—governance must be built into agentic systems as a primary design principle.
- Runtime governance (continuous monitoring and control of agentic behaviors) is critical due to the dynamic and networked nature of AI agents.
- Agentic identity and certification is a rising theme: identity mechanisms (akin to digital passports) certify an agent's origin, training data, ownership, and behavior.
- The IT Standards Association highlights standards for data transparency, age-appropriate design, and robust, accountable logging mechanisms to ensure AI agent trustworthiness.
- Investment perspective: Insight Partners manages $90B AUM and evaluates startups on their integration of a '5-layer governance stack'—covering build, deploy, runtime, remediation, and accountability.
- Governance is becoming a product and go-to-market (GTM) advantage, with enterprises demanding traceability, auditability, and kill-switch mechanisms as prerequisites—even for smaller contracts.
- Startups are under growing pressure to address governance requirements early, as client and investor expectations for compliance and risk management intensify.
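The '5-layer governance stack' above is described only at a high level in the session; as a purely illustrative sketch (the five layer names come from the summary, while the class, field names, and check logic are invented here), it could be modeled as a checklist an agentic AI product must satisfy before enterprise sign-off:

```python
from dataclasses import dataclass, field

# The five layers named in the session summary: build, deploy, runtime,
# remediation, accountability. Everything else in this sketch is hypothetical.
LAYERS = ["build", "deploy", "runtime", "remediation", "accountability"]

@dataclass
class GovernanceReview:
    # Maps each layer to whether supporting evidence (audit trails, logs,
    # kill-switch drills, ownership records, ...) was provided.
    evidence: dict = field(default_factory=dict)

    def missing_layers(self) -> list:
        return [layer for layer in LAYERS if not self.evidence.get(layer)]

    def passes(self) -> bool:
        return not self.missing_layers()

review = GovernanceReview(evidence={
    "build": True, "deploy": True, "runtime": True,
    "remediation": False,  # e.g. no documented rollback / kill-switch drill
    "accountability": True,
})
print(review.passes())          # False
print(review.missing_layers())  # ['remediation']
```

The point of the sketch is the session's argument that governance becomes a gating product feature: a single missing layer (here, remediation) blocks the review, mirroring enterprises treating kill-switch and auditability mechanisms as prerequisites.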
AI for the Global South: From Governance to Inclusion
The session at the India AI Impact Summit 2026 emphasized India's unique approach to artificial intelligence governance and deployment, highlighting three foundational pillars: avoiding restrictive EU-style legislation, adopting soft-touch regulations to balance safety with innovation, and prioritizing welfare-oriented outcomes over risk aversion seen in the Global North. Leaders stressed democratizing AI to reach the most marginalized, using digital public infrastructure (DPI) to scale AI benefits for local populations—such as farmers, gig workers, and students—and serving as a global model for the Global South. The discussion also explored the importance of development-centric AI, local adaptation through state- and district-level initiatives, the integration of massive datasets (Aadhaar, Jandhan, Digilocker, UPI) into AI-powered public services, and fostering South-South collaboration. International panelists underscored the need for AI solutions driven by user development needs, digital sovereignty, and cross-country cooperation within frameworks like BRICS, advocating for approaches that respect national differences while leveraging collective strength for equitable AI advancement.
- India's AI strategy is based on three pillars: (1) refraining from stringent, EU-style legislation, (2) employing soft-touch regulations that balance safety without stifling innovation, and (3) taking a welfare-outcome approach rather than a solely risk-averse one.
- India aims to democratize AI benefits, ensuring they reach marginalized groups like farmers, expectant mothers, students, and gig workers.
- Digital Public Infrastructure (DPI)—comprising Aadhaar, Jandhan bank accounts, Digilocker, UPI, and more—enables population-scale AI solutions for efficient and targeted welfare delivery.
- State and district governments in India are independently leveraging digitization and localized data to deploy hyper-local AI-driven solutions.
- Example use case: Using AI to create credit histories for small vendors via UPI transaction data, shared in vernacular languages to enhance financial inclusion.
- BRICS countries, under India's presidency, have established an AI task force and released a high-level statement on AI governance, with an emphasis on digital sovereignty and respecting national policy choices.
- Panelists advocated for development-oriented AI—prioritizing education, healthcare, and local solutions over competitive benchmarks like the largest language models.
- There is a strong call for South-South cooperation, leveraging India's model for development-focused, user-needs driven AI and digital sovereignty.
- AI is positioned as a tool to make governance more data-driven and proactive, suggesting India’s approach can be a model for other Global South countries, especially those in the African Union.
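The UPI credit-history use case in the bullets above can, in its simplest form, be sketched as aggregating a vendor's transaction stream into a summary record that a lender could score (a minimal illustration; the vendor IDs, amounts, and field names are all hypothetical):

```python
from collections import defaultdict

# Hypothetical UPI-style transactions: (vendor_id, amount_inr)
transactions = [
    ("vendor_a", 250), ("vendor_a", 120), ("vendor_b", 90),
    ("vendor_a", 400), ("vendor_b", 60),
]

# Aggregate per-vendor turnover and transaction count — the raw material
# for a simple credit-history proxy for small vendors without formal records.
summary = defaultdict(lambda: {"count": 0, "total_inr": 0})
for vendor, amount in transactions:
    summary[vendor]["count"] += 1
    summary[vendor]["total_inr"] += amount

print(dict(summary))
# {'vendor_a': {'count': 3, 'total_inr': 770}, 'vendor_b': {'count': 2, 'total_inr': 150}}
```

A real system would add time windows, fraud checks, and consent flows, and — as the bullet notes — surface the resulting history in vernacular languages; the sketch only shows the core aggregation step.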
The Sociotechnical Turn in AI Governance
The session at the India AI Impact Summit 2026 featured two prominent speakers: Professor Virginia Dignum, Professor of Responsible AI at Umeå University, Sweden, and Harry Pis, the Netherlands' Ambassador for AI. Professor Dignum emphasized the critical sociotechnical dimension of AI development, stressing that AI is constructed through human choices, governance, and incentives rather than being an uncontrollable force. She highlighted that effective AI governance involves more than technical advances; it must integrate social and societal implications, address who benefits and who bears costs, and build systems driven by questions of 'why' rather than just 'how.' Harry Pis presented the Netherlands' response to these challenges, sharing lessons from significant failures like the country's automated child benefits scandal to illustrate the need for robust governance. Pis outlined the Dutch approach emphasizing transparency, stakeholder collaboration, and societal impact, including initiatives like a public AI registry, participatory co-development of impact assessments, investments in national AI infrastructure, and education. Both speakers agreed on the urgency of meaningful, tangible multi-stakeholder collaboration and on moving the AI ecosystem from principle-based declarations to embedded, institutional practice, underlining that governance structures must evolve at the pace of technology and involve diverse participants to ensure trustworthy, responsible, and beneficial AI.
- AI development must be guided by social and human impact considerations, not just technical advancement.
- Professor Dignum calls for governance structures that ask 'why' AI should be used and stresses the necessity of integrating sociotechnical expertise from the start.
- AI is not neutral or inevitable—governance must emphasize accountability, oversight, and responsibility.
- Overreliance on readily available data can marginalize important issues that lack quantifiable data, leading to 'insight poverty.'
- The Netherlands has established a dedicated AI task force and an open registry documenting over 1,350 public sector AI systems from 320+ public bodies.
- Dutch policy framework involves multi-stakeholder collaboration and is co-developed with practitioners and researchers, with public consultation and transparency as cornerstones.
- The impact assessment approach is being updated to address generative AI and integrates human rights considerations, based on EU AI regulations.
- 12 ELSA labs (Ethical, Legal, Societal Aspects) in the Netherlands focus on real-world impact in domains like healthcare, public safety, and food systems.
- The Netherlands is investing in an 'AI factory' giving researchers and the public access to computing, and in a national language model (GPT-NL) built on trusted public data.
- €80 million from the national growth fund is allocated to AI education and literacy via NOLAI, the national education lab for AI.
- Both speakers stress that true AI governance requires international cooperation, cultural sensitivity, ongoing dialogue, and a commitment to learning from failures.
- Speakers call for practical translation of sociotechnical insights into policy and design, emphasizing adaptive, participatory approaches over abstract declarations.
Reskilling for Tomorrow: AI and India’s Jobs Transition
The session at the India AI Impact Summit 2026 tackled the complex, intersecting challenges posed by AI-driven labor market disruptions, climate change, and socioeconomic inequalities, emphasizing the insufficiency of a siloed, sector-focused approach. Speakers argued for localized, nuanced policy responses and stronger social protection mechanisms, referencing historical precedents in wealth redistribution and global migration. They highlighted new research indicating a slowdown in hiring but an overall increase in job growth and productivity among Indian firms adopting AI. Upskilling and reskilling through initiatives like the OpenAI Academy were identified as critical pathways to adaptation, and India's complex, multilingual context was flagged as an opportunity for innovative, context-specific AI solutions. The panel advocated optimism tinged with caution, stressing the need for creativity, forward deployment engineering roles, and the development of local datasets as emerging strengths and opportunities for India's workforce.
- Global governance gaps challenge effective redistribution of wealth in response to technological disruption; existing institutions are inadequate for transnational transfers.
- Migration is an inextricable part of managing labor market transitions—from both economic and environmental pressures.
- CW research identified potential for 48 million new jobs across 36 green economy value chains in India.
- Over 90% of India's employment is informal; disruptions in the small formal sector can have disproportionately large ripple effects throughout the economy.
- Recent OpenAI and IRA survey of 650 Indian companies shows slowed hiring but increasing overall job growth, with productivity gains due to AI adoption.
- OpenAI Academy offers free upskilling and certification programs, with collaborations planned with universities and experts to improve AI employability.
- Panelists stressed the higher value placed on 'green skills' and 'AI skills', and the importance of making skilling locally relevant.
- India's demographic and skill diversity, along with its complexity and multilingual setting, are viewed as unique advantages in the AI-driven transition.
- Emergence of new roles such as Forward Deployed Engineers and local data evaluators predicted as key growth areas.
- Speakers advocated optimistic adaptation, urging young professionals to proactively acquire AI-enabled skills rather than fearing job loss to automation.
How AI Collaboration Can Power India’s Digital Future @2047
The session opened the India AI Impact Summit 2026 with a strong focus on collaboration, inclusivity, and rapid technological adoption. Maruti Suzuki's leadership underscored AI's transformative potential in both corporate and national contexts, aligning with India's ambition to emerge as a global AI leader. The speakers articulated Maruti Suzuki's approach of embracing AI across its value chain—not just for operational efficiency, but to enrich customer experience and drive authentic, responsible innovation. There was a clear emphasis on working closely with startups, academic institutions, and government initiatives (notably, the India AI Mission), leveraging substantial public investments to accelerate scale and democratize AI solutions. Maruti Suzuki highlighted its rapid expansion goals—aiming to multiply production capacity several times in the next two decades—underlining the critical role of AI-fueled agility and ecosystem partnerships. The session established a tone of open innovation and signaled large-scale transformation underway in India's mobility and manufacturing sectors, powered by AI.
- Maruti Suzuki is collaborating extensively with startups, consultants, and tech firms to embed AI in their innovation and operational processes.
- The summit theme, 'Happiness for All, Welfare for All,' resonates with the inclusive and transformative aspirations of India's AI journey.
- India AI Mission is receiving over ₹10,300 crore in public funding for GPU infrastructure, startup financing, platforms, datasets, and indigenous models.
- Maruti Suzuki aims to multiply its vehicle production from 4 million units today to 24–30 million by 2047—a six- to seven-and-a-half-fold increase in half the historical timespan.
- AI adoption is central to customer experience (personalization, seamless journeys) and manufacturing (predictive maintenance, quality inspection) at Maruti Suzuki.
- Responsible AI frameworks have been implemented to ensure authentic, meaningful impact rather than following AI hype.
- Maruti Suzuki has evaluated 6,200+ startups, engaged with over 200, and formed business partnerships with 32; 60 startups are conducting paid pilots.
- Agility and speed in innovation (likened to landing a fighter aircraft, not a passenger jet) are now seen as strategic advantages in the AI era.
- India seeks to double the automotive sector’s GDP contribution, with ambitions to reach 49% of manufacturing GDP by 2030.
- The company has launched open innovation initiatives in collaboration with IIM Bangalore, IIM Kolkata, and other ecosystem partners.
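The production-scaling claim in the bullets above can be checked with simple arithmetic (the 4 million and 24–30 million figures are taken as quoted in the session):

```python
# Sanity check of the quoted production-scaling target.
current_mn = 4.0             # vehicles produced today (million, as quoted)
target_range_mn = (24, 30)   # 2047 target range (million, as quoted)

multiples = [target / current_mn for target in target_range_mn]
print(f"Implied growth: {multiples[0]:.1f}x to {multiples[1]:.1f}x")  # 6.0x to 7.5x
```

So the stated range implies a six- to seven-and-a-half-fold scale-up, which is why the session framed AI-fueled agility and ecosystem partnerships as essential rather than optional.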
Human Flourishing in the Age of AI
The session opened with reflections on the need for a structured, globally inclusive approach to managing the societal upheavals brought by rapid AI advancements, emphasizing human flourishing as a core goal. A diverse expert group, including industry, academia, philanthropy, and civic society, achieved consensus on the urgency of designing, rather than assuming, positive transitions for all people, given AI’s non-linear acceleration. Key risks discussed include the 'curing trap'—where increasing dependence on AI outpaces human capability growth—and the danger of unequal gains between those with and without agency or resources. The divide between the Global North and South was underscored, noting that while the North is building robust frameworks, the South, including India, is left improvising survival architectures without adequate scale. In response, the session announced plans to establish a neutral, India-anchored 'Institute for Human Flourishing' focused on AI transition infrastructure, equipped to aggregate data, develop responsible transition frameworks, localize solutions, and support rapid skilling. The institute seeks global partnerships to address unique contexts, especially for lower-income and informal labor sectors, and emphasizes equity, localization (especially multi-lingual and voice-first design), and knowledge-sharing as essential pillars. The session concluded with the formal (physical and soon, public) launch of a comprehensive report and the introduction of an esteemed, globally representative panel to further discuss strategies for inclusive, equitable AI transitions.
- Consensus among diverse experts that positive human outcomes in the AI era must be actively designed, not left to chance.
- AI capability is increasing at a pace (potentially 50x in 18 months) far surpassing human adaptation, demanding carefully managed transitions.
- Core risks identified include the 'curing trap' (over-reliance on AI without corresponding human skill growth) and unequal benefit accrual favoring those with higher agency, skills, or capital.
- The Global South (including India) lags behind the Global North in developing scalable AI frameworks, mostly operating in pilot/survival mode.
- Announcement of the forthcoming 'Institute for Human Flourishing'—a neutral, India-anchored global institution to manage societal AI transitions, emphasizing a Global South perspective but involving global stakeholders.
- Key institute pillars: neutral data aggregation for public good, responsible transition framework development, localization and scalable solutions, accelerated skilling initiatives, and orchestration to avoid duplicative efforts.
- Unique challenges for the Global South highlighted: affordability, public system roles, multi-lingual and voice-first user bases.
- Next steps: institute rapid launch, foundational partner engagement, initial governance and data coalition formation.
- Physical and soon-public release of an expert group report summarizing findings and recommendations.
- Panel constituted of senior representatives from BCG, Microsoft, ServiceNow, the Gates Foundation, UNICEF, and leading local data and education advocates to debate inclusive strategies for low-income and informal labor contexts.
Advancing Safe & Equitable AI in Healthcare Systems | India AI Impact Summit 2026
The session at the India AI Impact Summit 2026 critically examined the state of AI deployment in healthcare, highlighting both the promise and pitfalls of current approaches. Speakers underscored that while AI tools such as chatbots, predictive algorithms, and ambient documentation systems are rapidly transforming healthcare delivery—from neonatal care to disease surveillance—the evidence base for their true impact lags behind technical innovation. Persistent methodological gaps and insufficient attention to the real-world context threaten to repeat mistakes seen in two decades of digital health deployment, where interventions failed due to poor contextual fit and limited adoption by end-users. Panelists from Johns Hopkins, ICMR, WHO, ISB, and the Gates Foundation advocated for a new, intentional approach to AI evaluation—one that prioritizes user adoption, operational context, and intermediary outcomes, rather than just technical metrics like model accuracy. WHO’s push for a standardized AI taxonomy to enable effective and trustworthy evaluations was highlighted as foundational. The consensus was clear: cross-disciplinary collaboration, nuanced standards, and pragmatic evaluation metrics are essential for realizing AI's full impact on health equity and outcomes, particularly in resource-constrained environments.
- AI tools (chatbots, predictive models, ambient AI scribes) are increasingly used in areas such as neonatal care and palliative medicine.
- Evidence base and real-world validation are not keeping pace with AI innovation; persistent methodological gaps identified.
- Johns Hopkins' 15-year analysis shows digital health tools often fail due to poor context fit—user population characteristics determine impact more than model performance.
- Current AI evaluations overemphasize model accuracy, insufficiently address frontline usability, trust, infrastructure readiness, or health equity implications.
- WHO is leading efforts to standardize AI evaluations through a global taxonomy, ensuring interventions are categorized, replicable, and trustworthy.
- Critical call to develop pragmatic, context-aware evaluation approaches focusing on proximal outcomes, adoption metrics, and opportunity costs.
- RCTs may not be feasible; rapid A/B testing, embedded impact teams, and interdisciplinary collaboration recommended.
- AI evaluation must focus on genuine problem-solving rather than seeking problems to fit pre-built solutions.
- Consensus that cross-sector collaboration—including regulators, funders, policymakers, implementers, and end-users—is essential to bridge the evidence gap and realize AI’s promise responsibly.
Powering Quantum Technologies with AI: US–India Collaboration
This session at the India AI Impact Summit 2026 explored the synergy between artificial intelligence (AI) and quantum computing, emphasizing how the intersection of these domains can unlock unprecedented computational capabilities for industries. The session began with a keynote that established the importance of AI-quantum collaboration, especially in high-impact sectors like drug discovery, logistics, financial risk, and fraud detection. It highlighted the US-India tech partnership, collaborative pilots, and infrastructure building as essential next steps. Distinguished panelists from academia, industry, and research, including IBM Research India, L&T Semiconductor, and Quad Optima, discussed practical applications, policy frameworks, supply chain resilience, and the importance of talent and regulatory alignment. The importance of building quantum-AI centers of excellence, enhancing supply chains, and launching joint skilling and research initiatives was underscored. Overall, the session signaled both the promise and necessity of deepening AI-quantum ties and US-India collaboration to drive the next wave of trustworthy, scalable technological advancement.
- The session focused on 'Powering Quantum Computing with AI,' emphasizing the complementary nature of these technologies—quantum makes AI bolder, and AI stabilizes quantum.
- AI can dramatically accelerate quantum error correction and orchestration, enabling more reliable hybrid workflows and unlocking new avenues for optimization, anomaly detection, and generative potential.
- India-US collaboration was highlighted as a driving force, aligning both countries' national priorities on AI and quantum under shared tech and research agendas.
- Concrete proposals included running joint pilot projects in sectors such as drug discovery, financial risk analysis, and logistics, as well as establishing quantum-AI centers of excellence pairing US and Indian research hubs.
- India's National Quantum Mission (NQM) and semiconductor initiatives are pivotal, leveraging demographic strengths and institutional expertise (IISc, IITs) with US hardware, cloud, and standards leadership.
- Supply chain resilience for quantum technologies, including semiconductors and control electronics, was identified as a key policy focus along with risk-based regulatory approaches and legislative support (e.g., US National Quantum Initiative Act).
- Panel input called for expanded skilling and research exchange programs, embedding quantum and AI into curricula, and fostering public-private partnerships to build a robust, scalable ecosystem.
Culture and Code: Creative AI for Equitable Development
The panel session at the India AI Impact Summit 2026 brought together global experts to deeply explore the impact of AI on human creativity and cultural expression. The conversation centered around tensions between AI's potential to democratize creativity, making it accessible to many, versus the risk of cultural 'flatness'—a sameness emerging from algorithm-driven mainstreaming. Panelists like philanthropist Vilas Dhar emphasized that cultural homogeneity predated AI and is rooted in historical and systemic trends, yet AI can either reinforce or disrupt this pattern, depending on how inclusively and thoughtfully its systems are designed and governed. Oscar-nominated filmmaker Shekhar Kapur and neuroscientist Prof. Olivier highlighted the irreplaceable elements of human creativity—such as unpredictability, play, emotion, and hope—that AI cannot wholly replicate. They argued that AI's impact will depend on whether humans retain their own innate drive for curiosity and idiosyncrasy, and on how AI systems are trained (e.g., beyond just text toward multimodal, experiential data) and whose cultures they embody. The prevailing sentiment was that AI holds the promise of expanding creative participation globally but must be developed with a focus on preserving diversity, fostering hope and unpredictability, and measuring impact chiefly by outcomes that enhance human dignity and uniqueness.
- Panelists represented diverse perspectives: documentary filmmaking, brain-science-driven creative technologies, AI philanthropy, and film direction.
- The core debate revolved around whether AI will foster greater creative diversity or lead to cultural homogenization ('flatness').
- Vilas Dhar argued that cultural sameness is a pre-existing global trend, not inherent to AI, and that inclusive, community-driven training of AI can reclaim local and indigenous voices.
- Algorithms tend to reinforce mainstream tastes but could be redesigned to promote eccentricity and meaningful micro-cultures.
- Shekhar Kapur emphasized that true creativity is rooted in unpredictability, play, hope, and the emotional experience of being human—traits difficult for AI to emulate.
- Maintaining childlike curiosity and avoiding creative inertia are seen as key in resisting the formulaic tendencies of AI-driven creation.
- Prof. Olivier insisted that AI is not monolithic; the richness of outputs depends on diversity in both training experiences and modalities (beyond just language to include movement, expression, and collective history).
- AI's future development should focus on measuring impact by meaningful, ethical, and socially valuable outcomes—emphasizing human dignity and uniqueness rather than just technical performance.
Teacher-Led, Localised AI for Equitable Education | Global Roundtable
The session at the India AI Impact Summit 2026 brought together Indian and international education leaders to discuss the launch of the 'Compact for Frugal AI for Inclusive Education.' Panelists highlighted the urgent need to develop and distribute AI solutions that are locally hosted, low-cost, and accessible even in remote and low-bandwidth environments. Innovations like teacher-in-the-loop AI systems, trained on government-approved curricula, enable educators to efficiently generate and review content while retaining control and ensuring data privacy. The session underscored the importance of supporting India’s ambitious targets for Gross Enrollment Ratio (GER) and employability, especially in rural, tier-2/3, and open university contexts facing infrastructure, device, and linguistic challenges. International perspectives from Kenya and Pacific nations confirmed that frugal AI approaches offer scalable, sustainable pathways for digital inclusion in education globally, particularly in under-resourced contexts. Panelists agreed on the need for local and open-source AI tailored to diverse linguistic and curricular needs, aligned with national policies prioritizing inclusivity, skill development, and democratization of advanced technologies.
- Launch of the 'Compact for Frugal AI for Inclusive Education' with participation from India, Kenya, Mauritius, Fiji, and the Commonwealth of Nations.
- Frugal AI systems developed in India operate offline on local servers, preserving data privacy, reducing costs, and operating on low bandwidth—even 3G connections.
- Teacher-in-the-loop model ensures that educators review and curate AI-generated educational content at every stage for safety and curriculum alignment.
- Frugal AI platforms currently in use in five countries: India, Ghana, Zimbabwe, Kenya, and Nigeria across school, college, and vocational training levels.
- AI models trained on government-approved curricula in subjects ranging from mathematics and biology to fashion technology and lab sciences.
- India’s higher education challenge: over 40% of its 50,000 colleges are in rural and remote locations, often lacking infrastructure and high-end devices.
- Government target: Increase India’s Gross Enrollment Ratio (GER) in higher education to 50% by 2035 (from 28% currently); recent Skill India Report shows employability rose from 33% to 51% in 10 years.
- Open universities, representing over 11% of higher education enrollment, face unique challenges including content duplication and supporting massive, linguistically diverse student bodies (e.g., BAOU Gujarat with 300,000 students).
- Frugal AI supports content reuse as OER (Open Educational Resources), helping reduce duplication, enhance personalization, and accommodate multiple languages.
- International validation: Kenya, Mauritius, and Pacific Island representatives recognize frugal, locally hosted AI as critical for sustainable, inclusive education in rural/low-resource areas.
- AI democratization: Shift from large, centralized models to customizable, open-source, small language models suitable for personal and institutional devices.
Local Voices First: Designing Inclusive AI Data Systems
The session at the India AI Impact Summit 2026 centered on the importance of 'local AI'—the development and implementation of AI systems rooted in local realities, data ecosystems, and languages. Speakers from Civic Data Lab, PARIS21, the Indian Ministry of Statistics, and the Gates Foundation emphasized moving away from generic, top-down AI solutions towards inclusive, co-created approaches that empower communities and address region-specific challenges. Key initiatives highlighted include the Bhashini multilingual AI project and the role of data collaboratives for socioeconomic areas like public health and climate resilience. The session underlined the need for inclusivity by design, local capacity development, trust in AI systems, and the pivotal involvement of national statistical offices in standard-setting and quality assurance. The narrative linked local AI innovations to broader goals of welfare, happiness, and leaving no one behind—echoing the SDGs and India's global digital leadership.
- India hosts the first-ever global AI summit in the Global South, focusing specifically on welfare, happiness, and development.
- Bhashini, a Government of India initiative, is a multilingual AI platform supporting voice-to-text and translation in Indian languages—demonstrated as an example of local AI in action.
- Inclusion by design is stressed: affected communities should be co-creators, not just participants, in AI and data system design.
- Civic Data Lab promotes collaborative, inclusive data and AI platforms, particularly in public finance, gender, climate, and health.
- Key recurring themes: responsible AI, ethical practices, localized solutions, AI skilling, accessibility, and participatory governance.
- PARIS21 highlights three pillars for local AI impact: access to relevant data, capacity development (especially for underserved groups like small farmers), and building trust in AI systems.
- National statistical offices are underscored as critical for quality control, standard-setting, and facilitating multistakeholder data dialogue.
- The framing links local AI efforts to the Sustainable Development Goals (SDGs), emphasizing the need to 'leave no one behind.'
AI in Action: 2,000 Students Build Real-World Apps in 3 Hours | India AI Impact Summit 2026
The India AI Impact Summit 2026, particularly at the Tata Bharat Yuva AI Hackathon, showcased a transformative shift in democratizing artificial intelligence for India's youth. With more than 10,000 students participating from non-technical backgrounds and across regions, the event stressed the power of AI built in Indian languages and cultural contexts. Tata Consultancy Services (TCS) highlighted its pivotal role in leveraging AI for social impact, such as using AI-driven monitoring systems to improve school sanitation, reducing absenteeism among girls, and fostering inclusivity in public education. Through compelling student stories—from building prototypes in Sanskrit and Maithili to developing multilingual AI-based resume builders—the session underscored the accessibility and real-world application of AI, especially through no-code tools that empower creators regardless of technical expertise. The message was clear: the starting line for AI innovation in India is now open to all, ushering in a new era of equitable technological progress.
- Over 10,000 students across India, primarily from non-coding backgrounds, participated in the AI hackathon.
- TCS processes around 65,000 passport applications daily, digitally connects over 160,000 post offices, and facilitates billions in stock exchange transactions, underscoring its nationwide impact.
- TCS deployed AI-driven sanitation monitoring in rural schools, leading to a significant reduction in absenteeism among girl students by ensuring clean and usable toilets.
- A unique AI and cloud-based maintenance system provides real-time alerts for sanitation gaps in schools, enabling immediate corrective action.
- New education infrastructure, especially around hygiene, has been implemented in partnership with state governments and TCS, supported by extensive government funding.
- Students with no prior programming knowledge successfully built and deployed AI applications in their mother tongues—including Sanskrit, Maithili, and Hindi—demonstrating language inclusivity.
- Projects ranged from Sanskrit AI chatbots to multilingual AI-powered resume builders, directly addressing relevant, real-world problems.
- A featured AI tool (IdeaFlow) supports prototyping in nine Indian languages, dramatically lowering the entry barrier for innovation.
- The session emphasized 'No code, no problem' and that 'Mother tongue is your superpower,' re-centering AI creation within local cultural and linguistic realities.
- The event redefined inclusivity in technology, granting equal opportunity for students from diverse academic, socio-economic, and linguistic backgrounds.
Inclusive AI at Scale: How Commercial AI Can Expand Access and Opportunity
The session at the India AI Impact Summit 2026, moderated by Sil Gupta, explored the rapid evolution of India's AI ecosystem over the past two years, emphasizing a shift from pilot projects to mass adoption and real-world impact. Key representatives from Mastercard, Microsoft, and Fractal discussed initiatives and frameworks being deployed to drive the responsible development and scaling of AI across various sectors including financial services, infrastructure, and SMEs. Mastercard highlighted its focus on inclusivity, security by design, and interoperability to make credit access more equitable and secure, while Microsoft pointed to its Secure Future Initiative and open-source Responsible AI toolkit as the basis for compliant, safe, and widespread AI adoption, particularly among Indian SMEs through a thriving ISV ecosystem. Fractal stressed the strategic advantage Indian SMEs hold thanks to lower technical debt and urged them to focus AI adoption on solving core business challenges for tangible impact. Collectively, the session underscored India's distinct approach to AI: prioritizing applied, inclusive, and scalable solutions tailored to the needs of its vast and diverse population.
- India has transitioned from lagging in AI to witnessing significant AI advancements and infrastructure development in just two years.
- Government interventions across the AI value chain, from chipset manufacturing to cloud infrastructure and dataset creation, have underpinned this progress.
- Mastercard's approach is anchored in security by design, consent as a model feature, and interoperability, aiming to democratize safe and scalable AI-driven financial decision-making.
- "If it's not inclusive, it won't scale. If it doesn't scale, it doesn't matter"—Mastercard's guiding CEO motto for AI-driven products.
- Mastercard leverages local Indian talent and infrastructure (e.g., their global data science team is half-based in India) to innovate at scale.
- Microsoft’s Secure Future Initiative embeds security across software lifecycles: 'secure by design, secure by default, secure in operations'.
- Microsoft open-sourced its Responsible AI framework and toolkit—including standard-setting and safety dashboards—available to all on GitHub, spearheading responsible AI adoption.
- The Microsoft ISV (Independent Software Vendor) ecosystem is key to localizing AI solutions for millions of Indian SMEs, ensuring diffusion and relevance.
- Fractal underscored SMEs' agility and lower tech debt as critical advantages for rapid and impactful AI adoption.
- AI should be implemented to directly solve high-impact business problems, not for technology’s sake; e.g., reducing new product launch cycles from 90 days to 2 weeks in the fashion industry using AI analytics.
- The panel’s overall theme is a distinctive Indian approach: focusing on business-led, scalable, and mass-benefiting AI applications, contrasting international discussions largely centered on regulation and risk.
Building No-Code AI Applications for Public Services | India AI Impact Summit 2026
The session at the India AI Impact Summit 2026 commenced with a passionate articulation of India's unique position at the cusp of a transformative era driven by artificial intelligence (AI). The speaker outlined India's journey from lagging in previous technological revolutions to pioneering innovations such as Aadhaar and UPI, stressing that AI presents a new opportunity for India to set a global agenda. Key guiding principles were set forth: building 'AI-first' systems with proactive outreach and empathy, empowering frontline workers (the 'missing middle') with AI tools, and ensuring technological sovereignty by developing indigenous, open, and vernacular AI solutions. Emphasis was placed on the ethical, societal, and practical challenges of deploying AI for social upliftment, ensuring accessibility for a digitally diverse population. The hands-on segment introduced participants to AI application building—without coding—demonstrating the compression of the prototyping cycle and the growing inclusivity in AI creation. Participants were encouraged to measure AI not by outsmarting humans, but by its ability to uplift the most marginalized citizens, coining the concept of the 'Antyodaya test.' The overarching message urged collaborative, sovereign, and inclusive AI innovation as a foundation for India's future.
- India has previously missed pivotal technological revolutions but set new benchmarks with digital initiatives like Aadhaar and UPI.
- AI is framed not merely as a technological shift but as a civilizational and ethical turning point impacting governance, service delivery, and societal empathy.
- Digital systems fall short in proactively serving India's diverse, low-literacy population; AI-first systems aim to bridge this with vernacular, voice-based, and proactive interfaces.
- Current digital literacy in India stands at approximately 38%, highlighting the need for more accessible AI systems that reduce reliance on intermediaries.
- The 'missing middle'—frontline workers like ASHA and Anganwadi workers—will be empowered by AI assistants for diagnosis, advice, and vernacular interaction, strengthening service delivery.
- India is ensuring AI sovereignty with 38,000 GPUs available through the India AI mission, the creation of datasets via 'AI Kosh,' and the emergence of high-benchmark Indic language models.
- Policy recommendations include ensuring data sovereignty, promoting open-weight and indigenous models, and preferring open-source or India-tailored solutions.
- Design priorities specified: prioritize voice and vernacular interfaces, minimize form-based processes, develop proactive systems, and ensure frugal but sovereign applications.
- A hands-on workshop enables participants to build AI applications from ideation to prototype in under 60 minutes, leveraging natural language instead of coding.
- India’s benchmark for AI success should be the 'Antyodaya test': uplifting the last person at the last mile, rather than merely passing the Turing test.
Frugal and Quantum-Ready AI for Nations
The session on 'Frugal AI and Quantum Ready Systems Driving Growth, Impact, and the SDGs' at the India AI Impact Summit 2026 brought together a multilateral audience, emphasizing the move from experimental pilots to sustainable AI infrastructure that supports public institutions and aligns with sustainable development goals. Speakers highlighted the necessity for 'frugal AI'—not about doing things cheaply or with less impact, but about making AI systems that are affordable, energy-efficient, resilient, and tailored for long-term public deployment within finite budgets and infrastructures. The summit underscored the unique challenges faced by governments and the critical difference between technological ambition versus practical, institutional decision architecture. The session also addressed the need for quantum readiness to future-proof investments and avoid technological dead ends. The United Nations International Computing Centre (UNICC) shared global experiences around AI at scale, noting that success hinges on total cost of ownership, sustainability, contextual design, security, and the ability to scale without costs rising linearly. Ultimately, the summit called for a mindset shift: AI must be evaluated by its real, sustainable impact for citizens, with deliberate attention to scalability, fit-for-purpose design, and alignment with planetary needs.
- The session focused on 'frugal AI'—AI designed for real-world constraints (cost, energy, capacity), emphasizing deployability over sheer technological scale.
- Quantum readiness was highlighted as essential for future-proofing and ensuring today's infrastructure investments do not become obsolete.
- Over 100 countries' representatives attended the summit, showcasing the global relevance and institutional urgency of actionable AI policy.
- UNICC highlighted that most AI pilots globally do not survive beyond early stages due to sustainability and cost challenges.
- Cambridge's 'total cost of ownership' framework for AI was endorsed as a model for governments and multilaterals deploying AI at scale.
- AI systems should be affordable, accountable, contextually designed, secure, and sustainable over time—not just technologically advanced.
- Scaling AI should not increase costs linearly; frugal design from the outset is crucial for population-scale implementations.
- The need to tightly link AI deployments to SDGs (Sustainable Development Goals) and long-term societal progress, not only technical metrics.
- Quantum readiness involves setting standards, cultivating talent, ensuring interoperability, and institutional preparedness.
- A shift is advocated from technology-centric experimentation to institution-ready infrastructure, measured by transparency, resilience, and real citizen impact.
Quality Control in Healthcare AI
The session centered on the critical need for robust quality control and governance of AI applications in Indian healthcare. Dr. Karthik Arappa from the WHO and Vice Admiral Dr. Vive Kandesha discussed the imminent release of India's national AI strategy for health and underscored the complexity of ensuring patient safety, equity, trust, and regulatory compliance in AI adoption within clinical environments. Key issues addressed included the inadequacy of current performance metrics (like accuracy and F1 scores) to capture real-world effectiveness, the risks posed by data bias and model drift, and the necessity of post-deployment monitoring and local validation. The speakers advocated for a shift from lab-based evaluation to outcome-based and context-specific assessments, emphasizing the development of governance policies, inventory management, and stepwise capacity building to ensure that AI enhances healthcare without compromising ethical or safety standards.
- India's national AI strategy for health is being launched, with a forthcoming AI governance model unveiled by the Health Minister.
- There are currently roughly 600–700 FDA-authorized medical AI devices in the US, but many AI tools remain unregulated.
- Quality control is deemed essential in every phase of the AI lifecycle: design, validation, deployment, post-deployment, and feedback.
- Significant risks include biased or incomplete data, lack of transparency, cybersecurity vulnerabilities, and regulatory gaps.
- Performance metrics (accuracy, F1, ROC) are inadequate for real-world clinical evaluation; focus must shift to true health outcomes.
- Continuous local validation and post-deployment monitoring are vital to detect model drift and ensure patient safety.
- Equity, ethics, privacy, and transparency are central quality dimensions; poor QA can worsen inequities and expose institutions to reputational and regulatory risks.
- Implementation recommendations include AI governance policies, maintaining inventories, standardizing procurement, starting with small/local deployments, and scaling by building local capacity.
- Effective AI adoption must be outcome-driven, contextually validated, and guided by interdisciplinary collaboration among clinicians, technologists, and administrators.
Gyan Bharatam: Where Tradition Meets Technology
The session at the India AI Impact Summit 2026 centered on the transformative role of AI in preserving, decoding, and democratizing access to India's vast manuscript heritage. Speakers outlined the Gyan Bharatam mission, which leverages cutting-edge technologies—including AI, machine learning, and handwritten text recognition—to convert ancient, diverse, and often fragile manuscripts into machine-readable, searchable, and translatable resources. This digitization and semantic enrichment aims to rejuvenate Indian knowledge systems, making them accessible and relevant to new generations while enabling advanced scholarly engagement. The initiative is underpinned by multi-stakeholder alliances involving academia, technology companies, cultural custodians, government, and international partners, exemplified by the Gyan Setu competition that surfaced innovative AI solutions tailored for Indian script diversity. The session emphasized moving beyond mere preservation to active participation and research, allowing for cross-lingual access, deep analytics, and global dialogue, thereby transitioning India’s knowledge legacy from ‘archives to analytics’ and aligning with the Viksit Bharat 2047 vision.
- AI and allied technologies are being deployed to digitize, decode, and semantically enrich India's ancient manuscripts, covering knowledge spanning astronomy, mathematics, medicine, linguistics, and more.
- The Gyan Bharatam mission aims for not just preservation but also rejuvenation of India’s manuscript heritage, making it accessible and relevant for current and future generations.
- A multi-pronged alliance—spanning academic institutions, AI research labs, technology partners, state agencies, and international repositories—drives this initiative.
- Handwritten manuscript diversity (languages, scripts, regional and chronological script variations) presents unique technical challenges, driving new AI/ML algorithm development beyond western OCR norms.
- The Gyan Setu competition (September 2025) identified and onboarded four technology companies with bespoke AI solutions for script recognition, digital repair, and machine-readable formatting.
- Significant progress between September 2025 and February 2026, with alliance partners delivering major advancements in manuscript processing solutions.
- Initiative includes not just linguistic manuscripts but also non-textual content (maps, graphs, diagrams), requiring advanced AI for full semantic extraction.
- Live demonstrations at the summit (Bharat Gyan Bharatam pavilion, Hall 14) showcase conversational AI interfaces, multilingual translation, and hands-on exploration of manuscript resources.
- Focus on enabling AI-driven search, comparison with global knowledge repositories, and accessibility in multiple Indian and world languages.
- The project aligns with the Viksit Bharat 2047 vision, ensuring knowledge doesn’t remain an artifact but fuels innovation, ethics, and global intellectual discourse.
Great Powers in the Age of AI and Cognitive Systems
This session at the India AI Impact Summit 2026 delivered a stark analysis of global AI power dynamics, warning that AI diffusion is not simply a question of access but is fundamentally about power, wealth, and control. Drawing parallels between the current technological revolution and historical industrial shifts, speakers noted that the US and China are overwhelmingly dominant in foundational AI research and commercialization, with Europe trailing and India characterized as little more than a large market consumer. The presentation underscored the civilizational implications of AI: its profound capability not only to serve as a tool but to change human identity, decision-making, and even neurology, often without users’ awareness. The speakers called for India and Europe to form a 'third axis' in AI, leveraging Europe’s research with India’s scale and talent to create a global counterbalance. Governance, sovereignty over digital infrastructure ('the stack'), and cooperative policy efforts between industry, government, and academia were spotlighted as vital to stemming digital colonialism and securing national and individual agency at a time when traditional concepts of sovereignty are under threat from external technological influence.
- A comprehensive report was launched, identifying 19 core technologies shaping the future; six, including AI, will be the focus of fierce international competition.
- Stark benchmarks show the US leads all five key AI metrics, China is close on three, while the EU is catching up; India lacks foundational research and is mainly a market for imported AI.
- AI is catalyzing not a mere technological shift, but a civilizational transformation, fundamentally altering human identity, behavior, and society at scale.
- Modern AI-driven influence bypasses classical forms of colonialism: power is exerted digitally, subtly eroding individual autonomy without the need for physical presence or armies.
- Neuroscience evidence shows that AI and social media change brain structure and reduce reflective thinking, promoting addiction-like behaviors and susceptibility to influence.
- The session exposed that 'diffusion' of AI is often about global power projection, not shared benefit—those who control digital infrastructure ('the stack') control sovereignty.
- The panel advocated for a strategic India-Europe partnership to establish a ‘third pole’ in global AI, leveraging combined strengths to avoid technological dependency.
- Without foundational R&D and ownership of core digital infrastructure, national and individual sovereignty are at risk in the AI era.
Bringing AI to Rural Communities: Access, Ethics & Governance
The session at the India AI Impact Summit 2026 spotlighted the pivotal role of artificial intelligence in transforming India’s economy and education landscape, emphasizing equitable inclusion, especially in rural and border districts. With AI projected to add $450–500 billion to India's GDP by 2030, the discussion underscored that AI literacy must become foundational infrastructure—on par with electricity and broadband—not just an enrichment opportunity. Panelists stressed tailoring AI education and teacher upskilling to diverse local needs using a people-first, purpose-driven, and pedagogy-led approach. Initiatives like NEP 2020, Digital India, PM Shri, and Samagra Shiksha channel policy momentum and infrastructure, but real impact hinges on integrating local adaptation and robust teacher capacity building. Furthermore, practical implementation faces infrastructural and human resource hurdles in aspirational districts, such as single-teacher schools and lack of maintenance, yet teacher motivation and openness remain high. The panel concluded that contextually relevant, offline-capable, and supportive educational AI tools, combined with capacity-building and ongoing support—rather than fear of replacement—are key to empowering over 9 million teachers and leveraging India’s demographic scale for inclusive digital leadership.
- AI is projected to contribute $450–500 billion to India’s GDP between 2025 and 2030.
- India has over 250 million school-going children and 65% of its population in rural areas; inclusive AI literacy is critical to leverage the demographic dividend.
- AI literacy is advocated as essential economic infrastructure, alongside electricity and broadband.
- The panel stressed that ethical AI is foundational, not just a compliance layer, to build societal trust and stability.
- India has a substantial education workforce with over 9 million teachers, whose upskilling is vital for equitable AI adoption.
- Existing policy frameworks, including NEP 2020 and Digital India, support experiential and computational education; government schemes like PM Shri and Samagra Shiksha provide added pathways for implementation.
- Panelists recommended a three-pronged approach: people-first (local adaptation), purpose-driven learning, and pedagogy-led methods customized for teachers’ real-world needs.
- Real teacher needs are simple and practical (e.g., student engagement or simplifying complex topics), not always aligned with high-level policy language.
- Tools like Microsoft Copilot, preloaded with NEP 2020 learning outcomes, enable teachers to customize lesson plans for regional and cultural relevance.
- In aspirational and border districts, key barriers include basic infrastructure deficiencies, digital maintenance, and predominantly single-teacher schools.
- Teacher enthusiasm and willingness to adopt AI exist, but require reassurance that AI augmentation, not replacement, is the goal.
- Implementation success depends on addressing both mindset and infrastructure challenges, and on developing offline or low-resource digital learning solutions tailored for underserved areas.
Reimagining Creativity in the Age of AI
The session at the India AI Impact Summit 2026, hosted by journalist Anjana Om Kashyap with acclaimed lyricist Prasoon Joshi, offered a profound exploration of artificial intelligence’s (AI) impact on creativity, culture, and society. Joshi challenged the term 'artificial' in AI, emphasizing that AI is actually powered by real human experiences and emotions, transformed into data. He argued for a broader, context-sensitive view of creativity, extending beyond traditional arts to all forms of human expression and innovation. Joshi celebrated AI’s power as a grand equalizer, democratizing creativity and knowledge by providing widespread access to tools and information. However, he raised concerns about authenticity, deepfakes, and the ethical implications of AI-driven automation, particularly in media. Both agreed that transparency, robust policy, legal frameworks, and ethical education are essential to address AI’s challenges. Importantly, Joshi maintained that while AI can automate and imitate, it cannot yet access the 'unexpressed'—the spiritual or intuitive spark of human originality—thus ensuring humans retain a unique creative edge. The session closed with a call for India to actively shape the global AI narrative through thoughtful discourse and leadership rather than passively following technological currents.
- Prasoon Joshi questioned the term 'artificial' in 'artificial intelligence,' stating that AI is fundamentally built on genuine human data and experiences.
- Creativity, historically and in the age of AI, should be seen broadly: not just in arts but in any innovative human endeavor.
- AI acts as a democratizing force, granting universal access to creative tools and intelligence, breaking down traditional hierarchies.
- AI automation has reached Indian newsrooms, with AI avatars (e.g., Anjana 2.0) now handling voice-overs and content, exemplifying rapid adoption.
- Concerns highlighted regarding deepfakes, misuse, and the need for stricter transparency, ethical standards, and regulatory frameworks.
- The need for education in schools about AI, its uses, and its ethical implications was underscored.
- India's philosophical tradition provides a unique, inclusive perspective: not human-centric, but creation-centric, allowing a nuanced AI discourse.
- Joshi stressed the irreplaceable human capacity for intuition, spiritual connection, and expressing the 'unexpressed,' which AI cannot replicate.
- The session underscored India's proactive role in global AI leadership and the importance of shaping, not following, the AI wave.
Pathways to Equitable AI Compute Access
The multistakeholder dialogue, organized by the Oxford Martin AI Governance Initiative in partnership with the Quantum Hub and the Committee of Global Thought at Columbia University, focused on the imperative of equitable access to AI compute infrastructure. Leaders from academia, industry, and policy discussed models such as 'shared compute hubs' as a means for countries, especially in the Global South, to pool resources and gain strategic sovereignty over AI capabilities. Drawing parallels to the early open-source movement, panelists emphasized that compute access now defines who can participate, innovate, and benefit from the AI revolution. They highlighted the urgent need for policy shifts to ensure competition, prevent market concentration, and establish compute as digital public infrastructure. The discussion acknowledged India's rapid progress in AI readiness but stressed persistent gaps in local compute availability, identifying shared hubs and international cooperation as essential steps forward.
- The session was co-hosted by the Oxford Martin AI Governance Initiative, Quantum Hub, and Columbia University's Committee of Global Thought.
- Central topic: 'Shared compute hubs' as a potential solution for equitable AI access, especially for the Global South.
- Panel included voices from Oxford, Mozilla, NASSCOM, Microsoft, Tony Blair Institute, Yotta Data Services, and the Quantum Hub.
- Historical parallels were drawn with the 1990s open web movement, underscoring the risks of infrastructure concentration and market monopolies.
- A cited statistic: 90% of leading AI scientists originate outside of the US and China, but global talent migration is subject to change due to geopolitical pressures.
- Current AI compute is heavily concentrated, putting innovation and policy leverage mostly in hands of a few countries or corporations.
- National compute strategies are insufficient for most countries due to high costs and lack of bargaining power; pooling resources is proposed as an alternative.
- Shared compute hubs would be governed independently of national politics, modeled as digital public infrastructure.
- India, despite its large market and data availability, only recently saw significant advances in local compute infrastructure.
- Policy reforms and new technical models are urged to secure equitable access, competition, and innovation in AI.
AI in Public Health: Opportunities and Challenges
The session at the India AI Impact Summit 2026 brought together distinguished leaders from healthcare, technology, CSR, and global organizations to discuss actionable strategies for scaling AI innovations in India's health sector. The discussion highlighted AI's transformative potential in diagnostics, disease screening, and surveillance, especially in tier 2 and 3 cities and low-resource settings. Challenges addressed included integrating AI into primary healthcare, the necessity of building user awareness, and aligning innovations with national policies for sustainable impact. International perspectives, notably from the EU, reinforced the need for robust data governance, interoperability, open-source standards, and joint investment mechanisms to ensure sovereignty over health data and effective scaling. Panelists emphasized the need for pragmatic collaboration across public, private, and capital ecosystems, underlining that successful AI integration in healthcare requires thoughtful alignment with local context, regulatory support, capacity building, and sustainable financing.
- AI is seen as the greatest enabler in Indian healthcare to date, particularly in diagnostics, screening, and disease surveillance.
- Significant opportunities exist for AI to improve access, quality, equity, and affordability in India's low-resource health settings.
- Challenges remain in leveraging AI effectively at the primary healthcare level, especially for preventive and promotive health—current tools are insufficient and user awareness is lacking.
- Diagnostic AI is especially impactful in tier 2 and 3 cities where traditional diagnostic resources are limited.
- Successful AI adoption requires integration into existing national programs and policies; standalone solutions without such support have limited impact.
- International perspectives (notably from the EU) stress that data sovereignty, security, and interoperability are critical, advocating for collective ownership of health data and open-source approaches.
- India and the EU share complementary challenges: India’s scale and population versus Europe’s shortage of healthcare professionals; both can benefit from collaborative pilots and shared investment.
- Recent regulatory advances in the EU (like EMA recognizing AI for clinical trials) were cited as key enablers for innovation.
- Financing, particularly through joint ventures between Indian and European VCs and funds, is necessary to move AI solutions from pilots to scale.
- The panel advocated for a holistic ecosystem approach—combining policy support, investment, technology, capacity building, and cross-sector collaboration for AI-driven health system transformation.
AI-DPI Geopolitics: Strategy for the Global Majority
The session at the India AI Impact Summit 2026 focused on the integration of Digital Public Infrastructure (DPI) and Artificial Intelligence (AI) in the Global South, particularly emphasizing India's leadership in open digital frameworks. Panelists highlighted the complementary but fundamentally different development philosophies of DPI—centered on openness, interoperability, and public benefit—and AI—driven by proprietary models and commercial interests, mainly led by US and Chinese companies. The core discussion revolved around finding pathways to harmonize these technologies for the benefit of citizens, with strong emphasis on user-centric and use-case-driven development. Bottlenecks identified include misaligned economic incentives, fragmented value chains, and the challenge of maintaining sovereignty and inclusivity. The need for regional and international cooperation, rather than mere replication of US/China models, was underscored, with calls for joint initiatives, cross-border talent development, and increased state-driven investment. The session concluded that merging DPI and AI must prioritize citizen needs, inclusivity, and preserve public-interest principles at a time of increasing geopolitical rivalry and technological lock-in.
- India is highlighted as a global leader in building open, interoperable Digital Public Infrastructure (DPI) such as digital IDs, payments, and data exchange systems.
- AI innovation is characterized by proprietary models and dominance by US and Chinese tech giants, often focused on closed architectures and commercial returns.
- DPI and AI share common end goals—improving service delivery and citizen well-being—but diverge in principles, with DPI centered on openness and AI on competition.
- Successful integration demands governance mechanisms, safeguards, and a renewed focus on public interest, sovereignty, and reducing new dependencies.
- Primary bottlenecks include: (1) competition driven more by economic logic than user needs, (2) fragmented or incomplete value chains, and (3) a lack of holistic policies addressing AI-DPI convergence.
- Panelists advocate for use-case-driven deployment and user-centered design to ensure real-world impact, especially in sectors like healthcare, education, and crisis management.
- A 'third way' for the Global South is proposed—not as technological autarky, but as a model leveraging existing open solutions (e.g., India AI Mission, LatAm GPT) and fostering cross-regional cooperation.
- Strategic government investment and regional collaboration agreements are deemed crucial for localized capacity building and reducing dependency on foreign technology providers.
- International cooperation in AI should shift towards a democratized, needs-driven model, fostering collaboration between like-minded developing countries for talent development and solution co-creation.
From Policy to Practice: Governing AI for Global Impact
The panel session, organized by NASSCOM in collaboration with the Future of Privacy Forum (FPF), focused on the complex and evolving landscape of AI governance, specifically the tension between policy intent and practical implementation. Karina Prunki, Senior Research Fellow at Oxford and lead author of a comprehensive AI safety report, highlighted the unique risks associated with open-weight AI models, stressing the importance of careful pre-release governance, risk assessment, and ecosystem monitoring due to the difficulty of controlling these models once released. She provided concrete approaches such as capabilities assessment, misuse potential evaluation, and staged release strategies. Jules from FPF brought a cross-jurisdictional perspective, describing the organizational chaos and adaptive structures as companies and regulators scramble to define roles, implement impact assessments, and scale AI governance tools. Both speakers emphasized the need for shared responsibility, the challenge of dual-use risks, and the disruptions caused by rapid innovation outpacing governance frameworks. While some technical and procedural advances exist, the field faces persistent measurement, incentive, and capacity challenges, highlighting the urgency of developing scalable and coordinated governance mechanisms for responsible AI deployment.
- NASSCOM and FPF co-organized a panel on translating AI policy to governance practice, stressing shared responsibility across regulators, industry, and users.
- Karina Prunki presented findings from a new Oxford-led 173-page AI safety report (also available in a 20-page policymaker summary), emphasizing evidence-based governance for open-weight AI models.
- Open-weight models, once released, cannot be retracted and present heightened risks, including misuse for generating harmful content (e.g., deepfakes, explicit material).
- Best practices for open-weight model governance include staged releases with restricted research access, strong pre-release testing, capability and misuse potential analysis, and ongoing ecosystem monitoring.
- Technical approaches such as data curation and machine unlearning to mitigate risks are in early stages of development.
- Organizations are struggling to designate clear governance responsibility, with privacy officers often tasked with AI oversight but lacking comprehensive scope or resources.
- Businesses are rapidly scaling up their own AI governance processes, implementing impact assessments to comply with global regulations, yet considerable chaos remains in harmonizing these disparate efforts.
- AI governance tools are improving, as organizations train models based on past reviews, but challenges in accuracy and measurement persist.
- Intense competitive and deployment pressures often undermine governance intent, highlighting the necessity for coordinated, cross-stakeholder solutions.
- The panel underscored that dual-use risks and the challenge of aligning rapid innovation with slow-moving governance structures remain central to the current AI policy landscape.
AI for Secure India: Countering Cybercrime and Deepfakes
The session at the India AI Impact Summit 2026 focused on the rapidly evolving intersection of artificial intelligence and cybercrime in India. Panelists highlighted the transformative impact of AI on the digital economy and governance, noting considerable increases in both technological adoption and cyber threats. Key topics included the proliferation of deepfakes, synthetic media, and AI-enabled scams that have become sophisticated enough to evade traditional law enforcement approaches. The discussion recognized the limitations of current law enforcement capabilities, especially at the ground level due to resource shortages and lack of technical infrastructure. Government representatives outlined recent policy and legal frameworks aimed at deterrence and accountability, such as updates to the IT Rules 2021. Education sector leaders stressed the gap between theoretical legal advancements and practical enforcement, emphasizing the urgent need for widespread public awareness campaigns and better coordination among government, law enforcement, and educators. Case studies illustrated the profound risks for citizens across all demographics, including scam compounds at international borders and AI voice fraud targeting senior citizens. The session concluded that a multisectoral, ground-up approach is essential to counteract evolving AI-driven threats and to ensure a secure, inclusive digital future.
- India's IT sector now contributes over 28% of a $1 trillion global economic impact, with close to 20% of India's GDP coming from this sector.
- UPI and digital payments have transformed daily transactions for all societal levels, but have also created new cyberattack vectors.
- AI-enabled cybercrime is on the rise, with over 22,000 crore INR lost to cyber fraud in 2024 alone and more than 2.6 million cases reported.
- Cybercriminal operations are now internationally interconnected, operating from scam compounds at international borders and using AI-generated deepfakes for sophisticated financial theft and extortion.
- Government policies, like the updated IT Rules 2021, are being rolled out to enhance deterrence, detection, and platform accountability.
- Ground-level enforcement is lagging due to lack of technical infrastructure and international cooperation within law enforcement agencies.
- Education and awareness at all societal levels, especially outside tier-1 cities, are seen as essential for prevention and mitigation.
- Recent high-profile scam cases include AI deepfakes impersonating relatives and voice cloning, affecting both the urban educated and vulnerable senior citizens.
- Panelists called for stronger multi-stakeholder collaboration among regulators, police, educators, and industry, as well as improved real-time detection and response mechanisms using AI.
AI in Schools: Protecting Learners and Empowering Educators
This session at the India AI Impact Summit 2026 delved deeply into the integration of AI within education, examining the ensuing challenges and opportunities, especially regarding online safety for children and youth. Panelists from diverse backgrounds—including global policy, online safety, youth research, and international development—discussed the evolving landscape of AI-powered technologies in schools and children's daily lives. They highlighted the shift from voluntary commitments to binding regulations for child safety, referencing global moves such as age restrictions, system-level controls, and content classification. The conversation centered on the need for shared accountability among governments, platforms, families, and educators, with a call to more meaningfully integrate youth voices in policy development. Notable international developments were discussed, including India's decisive regulatory response to online harms and emerging coalitions in Europe proposing bans on harmful platform features. The panel emphasized the importance of bridging the digital divide before broad AI deployment and advocated for AI policy to be responsive to developmental needs and rapidly changing technological contexts. Across these discussions, panelists agreed on the necessity of both practical measures and innovative regulatory approaches to ensure child well-being and safety in an AI-mediated world.
- AI integration into school curriculums is accelerating, raising new online safety concerns for children.
- Policymakers are shifting from voluntary child online safety commitments to binding obligations and duties of care for platforms.
- Recent Indian regulatory action targeting online obscenity was highlighted as an example of common-sense, culturally-grounded, and publicly supported intervention.
- European countries, led by Spain and following Australia's example, are pursuing social media bans for minors under 16 and proposing restrictions on addictive features like infinite scrolling.
- Youth voices are being increasingly recognized as essential in shaping technology policies related to their welfare, with international initiatives to enshrine their participation.
- Still, over 30% of the global population lacks internet access, necessitating equitable digital infrastructure before universal AI adoption.
- The burden of online safety currently falls on parents, educators, and children, but there is a growing call to shift greater accountability onto tech companies.
- There are concerns that outright bans on social media for minors may drive harmful behavior underground without improving safety, highlighting the need for nuanced, context-sensitive measures.
- The panel underscored the complexity of AI's impact—the 'ambient AI media environment'—and called for new vocabulary and frameworks to address its influence on youth development.
- There is consensus on the urgent need for practical, adaptable, and enforceable regulatory frameworks to keep pace with advances in AI and protect children effectively.
Launch of the AI Evidence Playbook: From Policy to Practice | India AI Impact Summit 2026
The session at the India AI Impact Summit 2026 focused on the unveiling of JPAL’s new 'AI Evidence Playbook'—a practical guide for policymakers, practitioners, and funders on designing, implementing, and evaluating AI-enabled programs. The panel, featuring senior representatives from JPAL, Google.org, and the Indian government's technology mission, lauded India’s efforts in democratizing AI adoption, highlighted successful grassroots data initiatives, and discussed the transformative potential and challenges of AI in public infrastructure, particularly for underserved communities. Key themes included the expansion of digital public infrastructure (DPI) through AI, scalable and contextually relevant AI applications (like voice-based services for rural farmers and AI-powered healthcare), the importance of designing solutions with end-user realities in mind, the need to reskill the workforce amid automation, and the global application of Indian innovations. Real-world lessons were emphasized, including approaches for scaling from pilots to large deployments, and the need for sustainable, user-centric design to maximize impact.
- JPAL launched the 'AI Evidence Playbook', a regularly updated resource to guide policymakers and implementers in AI program design, deployment, and evaluation.
- The playbook consolidates evidence from randomized evaluations and decades of behavioral research in technology for good, and is co-authored by JPAL experts.
- Google.org is the anchor funder for JPAL’s AI Evidence Initiative, with its head, Maggie Johnson, highlighting innovative data-for-AI models that generate income for rural communities.
- Abhishek Singh (India AI Mission) described India's approach as 'DPI raised to the power of AI', leveraging AI to make digital services voice-accessible and linguistically inclusive.
- Concrete examples were shared, such as rural Indians creating local language datasets for global AI labs, earning royalties above minimum wage, and scaling the model internationally.
- 'Green Light', an AI-powered traffic system, is reducing city emissions by 10% and full stops by 30% in pilot Indian cities.
- Success factors for scalable AI applications include solving real-world problems, end-user-centered design (catering to low-cost devices and low connectivity environments), and sustainability.
- Risks highlighted included workforce disruption in low-value tech jobs and the critical need for large-scale skilling and reskilling programs.
- The panel commended the Indian government’s commitment to democratizing the AI conversation through inclusive, large-scale engagement across the country.
- AI's expansion in India is seen as a model for other emerging economies, especially in leveraging digital public infrastructure to deliver services at scale.
One Billion Futures: AI and Education Equity in the Global South
This session at the India AI Impact Summit 2026 brought leading voices from education technology, including representation from the Global South, to challenge the status quo of technological innovation in education. Speakers highlighted the urgent need to view AI not merely as an innovation tool but as a potential equalizer capable of fostering educational equity among the predominantly young populations of the Global South, particularly India. Stark statistics were presented on access and literacy challenges—such as 270 million children out of school and 70% of children under 10 unable to read at grade level in the region—along with underinvestment in education. The discussion emphasized how past waves of educational technology have sometimes deepened divides, but the new capabilities of AI, notably multimodality and real-time knowledge tracing, offer opportunities to meet learners and teachers where they are. Panelists described approaches to AI literacy, insisting that foundational skills cannot be leapfrogged and must be addressed contextually based on local needs. They called for an ecosystem approach involving governments, communities, and industry—a conscious, intentional design of AI—so it can restore agency and dignity, especially for marginalized voices, instead of perpetuating existing inequities. The session signaled a rallying call for AI to be a unifying force for transformative change in education rather than a wedge that widens gaps.
- AI in education should be seen as essential infrastructure, not an optional enhancement.
- Four out of five people live in the Global South, with over 90% of the world's young people residing there.
- Over 270 million children in the Global South are out of school; 70% of those under age 10 cannot read grade-level text.
- Education expenditure in the Global South averages less than $55 (5,500 INR) per child per year.
- Digital adoption in recent decades has often widened, not reduced, inequities in education.
- Emerging AI solutions (e.g., multimodal content, real-time knowledge tracing) offer potential to personalize education and reach remote learners.
- AI adoption risks deepening divides due to massive asymmetries between technology creators (a few hundred thousand) and billions of end users.
- AI literacy is not a one-size-fits-all curriculum but should be viewed as a ladder, tailored to each child's context and foundational level.
- Organizations like CK12 and code.org have reached over 100 million learners globally with free, AI-enabled courseware.
- Case examples show AI can empower marginalized populations—such as women in remote areas—when thoughtfully designed.
- Collaboration across government, educators, community, and tech industry is needed to define objectives, provide infrastructure, and set guardrails for AI in education.
- Call to action: AI must be intentionally and ethically designed to restore agency and dignity, and bring impact and technology sectors together as a unified force.
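Real-time knowledge tracing, cited above as one of AI's newer capabilities for personalizing education, can be illustrated with the classic Bayesian Knowledge Tracing update. This is a generic sketch, not any panelist's system; the parameter values (slip, guess, and learn probabilities) are hypothetical placeholders.

```python
def bkt_update(p_known, correct, p_slip=0.1, p_guess=0.2, p_learn=0.3):
    """One Bayesian Knowledge Tracing step: update the probability that a
    learner has mastered a skill after observing a single answer."""
    if correct:
        # P(known | correct answer) via Bayes' rule
        posterior = (p_known * (1 - p_slip)) / (
            p_known * (1 - p_slip) + (1 - p_known) * p_guess)
    else:
        # P(known | incorrect answer)
        posterior = (p_known * p_slip) / (
            p_known * p_slip + (1 - p_known) * (1 - p_slip))
    # Account for the chance the learner acquired the skill during this step
    return posterior + (1 - posterior) * p_learn

# Trace one learner through four observed answers, starting from a weak prior
p = 0.4
for answer in [True, True, False, True]:
    p = bkt_update(p, answer)
print(round(p, 3))
```

Systems of this kind let tutoring software decide in real time whether to advance, review, or remediate, which is what makes "meeting learners where they are" computationally tractable.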
AI in Financial Services: From Innovation to Impact
This session at the India AI Impact Summit 2026 featured two groundbreaking presentations leveraging AI for environmental resilience. Biomakers showcased their advanced AI-driven soil intelligence platform that evaluates soil health using microbiome DNA analysis, enabling actionable, science-backed recommendations to Indian farmers and stakeholders. The company's global approach is backed by a decade of validated research and recently expanded into India through a partnership with Agrael, allowing localized deployment and further soil sustainability improvements. Iterative feedback from farmers has led to continual digital platform enhancements for user-friendly insights. The second team, Resilience AI, introduced 'Resilience 360', an AI-powered climate risk management tool addressing natural disasters across sectors. Their single-window solution provides hyperlocal risk predictions for six disaster types within 30 minutes, using proprietary visual and scientific language models. The tool supports both preventative and recovery initiatives, has secured recognition from India’s National Disaster Management Authority, and is commercially deployed in multiple countries, including in India with significant municipal and enterprise sector traction. The teams highlighted the platforms’ evolution based on stakeholder feedback, validated performance, strong partnerships, and ambitious scaling strategies.
- Biomakers’ AI platform analyzes soil microbiome DNA to provide functional and ecological predictions for soil health.
- The solution integrates microbiological data, environmental variables, and agronomic trials into a unique soil intelligence platform.
- Validated over 10 years, resulting in scientific publications and granted patents.
- Three-tiered commercial offering: per test, per trial, and per-acre pricing models.
- Global reach with a recent partnership with Agrael (India) bringing the technology to local labs.
- Reported 20% reduction in agrochemical fertilizer use as an impact metric.
- Continuous platform evolution driven by direct farmer and stakeholder feedback, focusing on actionable insights and simplified user interfaces.
- Resilience AI’s 'Resilience 360' provides building- and site-level risk assessments for six disaster types with 96% confidence within 30 minutes.
- Combines custom visual language models with geological, climatic, and infrastructural data, covering 8.9 million structures in model training.
- Officially recognized by Indian national agencies and deployed with the United Nations in 84 Indian villages and in commercial sectors.
- Operating in India, North America, and the Philippines, targeting a $50 billion operational market.
- Secured copyright and patent filings, with demonstrated commercial traction and intentions to break even next year.
- Stakeholder and user feedback loops informed both model development and product features for greater relevance and usability.
AI in Recruitment: Evidence, Skill Matching & Workforce Productivity | India AI Impact Summit 2026
In this session, Professor David, co-chair of JPAL's AI Evidence Initiative, addressed the critical organizational decision of whether to automate or augment human roles with AI, especially in sectors like healthcare and education. Drawing from his AI-driven teacher hiring study in Ghana and policy simulations relevant to India's health sector, he emphasized the current lack of robust evidence to inform automation-vs-augmentation choices. Through interactive polling, David guided the audience to introspect on their own workplace needs and simulate high-stakes policy decisions, highlighting that both cost savings and human impact must be evaluated. He stressed that, while automation offers potentially greater cost savings, it may come with significant downsides in outcomes, and that rapid technological development further complicates these decisions. The overarching message underscored the urgent necessity of evidence-based policymaking and real-world experimentation to responsibly navigate the spread of AI in organizational and societal contexts.
- Professor David is co-chair of JPAL's AI Evidence Initiative and recipient of the Uroho Yansen Award.
- He shared findings from a Ghanaian study on using generative AI in teacher hiring.
- Posed the central question: Should organizations automate tasks entirely or use AI to augment human decisions?
- Engaged the audience in real-time polling and discussion exercises around automation vs. augmentation in policy scenarios.
- In the health sector case simulation, automation offered a 20% cost saving, augmentation a 5% saving, but automation risked up to 30% worse patient health outcomes.
- Emphasized that most organizations lack solid evidence to base their AI policy decisions on; evidence-based policymaking is critical.
- Technology costs are non-trivial and may not produce net savings; design choices and evolving AI capabilities further complicate decisions.
- Concluded that, while researchers can help quantify impacts, organizations must weigh values, goals, and context-specific factors.
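The automation-vs-augmentation tradeoff from the health-sector simulation can be made concrete with a simple expected-value comparison. The percentages below are the session's illustrative figures (20% vs. 5% cost saving, up to 30% worse patient outcomes); the baseline budget and the monetized weight placed on health outcomes are hypothetical assumptions chosen only to show how the comparison works.

```python
def net_value(cost_saving_pct, outcome_change_pct, budget, outcome_value):
    """Crude policy comparison: money saved plus the value gained (or lost,
    when negative) through changed patient outcomes."""
    savings = budget * cost_saving_pct / 100
    outcome_effect = outcome_value * outcome_change_pct / 100
    return savings + outcome_effect

BUDGET = 100_000_000          # hypothetical annual program budget
OUTCOME_VALUE = 500_000_000   # hypothetical monetized value of baseline outcomes

automate = net_value(20, -30, BUDGET, OUTCOME_VALUE)  # larger saving, worse outcomes
augment = net_value(5, 0, BUDGET, OUTCOME_VALUE)      # smaller saving, outcomes preserved

# With these weights, augmentation dominates despite the smaller cost saving
print(automate, augment)
```

The point of the exercise mirrors the professor's: the answer flips entirely with the weight an organization assigns to outcomes, which is exactly why evidence on real outcome effects, not cost figures alone, must drive these decisions.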
Women at the Frontline of AI in Community Health
The session at the India AI Impact Summit 2026 highlighted the critical role of AI-driven solutions in addressing societal challenges at a population scale, with a particular focus on healthcare delivery and inclusive access to digital public goods. Panelists emphasized that technology adoption and impact depend not only on innovative solutions but also on trust, empowerment, and enablement of frontline workers. Real-world examples were provided, such as the successful implementation of immunization programs at the district level using data-driven approaches and the massive scale of Aadhaar authentication processes handled securely with AI enhancements. Nonprofits like Arman and Kushi Baby showcased AI-powered tools to assist community health workers, including multilingual, multimodal chatbots and integrated platforms for real-time data monitoring and support. The discussions underscored a human-centered approach to deploying AI, where technology augments—not replaces—human judgment, with safety nets like human-in-the-loop systems ensuring responsible, context-sensitive implementations. Overall, the session called for intentional, inclusive AI design and deployment to ensure equitable health and identity service delivery across India's vast and diverse population.
- AI solutions are being designed as digital public goods, targeted at solving societal problems at population scale.
- Adoption of technology relies heavily on trust, empowerment, and enablement of frontline and last-mile workers, not just training or system integration.
- Aadhaar handles over 80 million (8 crore) biometric authentications per day; AI is continuously used to improve match rates, reduce false positives, and strengthen fraud analytics.
- Population-scale health programs, such as in Tripura, increased immunization rates from 60% to 100% through data-driven systems and early adoption of digital tracking (MCTS).
- Arman has deployed an LLM-powered chatbot for Auxiliary Nurse Midwives (ANMs), accessible via WhatsApp, supporting multilingual voice/text queries about high-risk pregnancies; it serves nearly 6,000 ANMs across three states, impacting 700,000 to 800,000 pregnant women.
- Kushi Baby has developed a Community Health Integrated Platform for tracking activities of community health workers and real-time dashboarding for health officials; their proprietary LLM, Sambadi, provides instant insights from health data.
- All major AI implementations discussed include a human-in-the-loop approach, reinforcing oversight, safety, and context-aware decision-making.
- There is a strong emphasis on designing AI with community needs, inclusivity, and gender considerations, and involving end users in both solution design and scaling.
- Technology's true success is measured by its ability to reach and benefit the last mile beneficiaries rather than just technical rollout or trainings.
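The Aadhaar point above, improving match rates while reducing false positives, is at bottom a threshold-tuning problem common to all biometric matchers. The sketch below uses synthetic similarity scores (real systems use calibrated scores at vastly larger scale) to show the trade-off between false accepts (FAR) and false rejects (FRR) as the decision threshold moves.

```python
# Synthetic similarity scores for illustration only
genuine = [0.91, 0.88, 0.95, 0.79, 0.85, 0.93]   # same-person comparisons
impostor = [0.30, 0.45, 0.62, 0.20, 0.55, 0.41]  # different-person comparisons

def rates(threshold):
    """False accept rate and false reject rate at a given match threshold."""
    far = sum(s >= threshold for s in impostor) / len(impostor)  # false accepts
    frr = sum(s < threshold for s in genuine) / len(genuine)     # false rejects
    return far, frr

for t in (0.5, 0.7, 0.9):
    far, frr = rates(t)
    print(f"threshold={t}: FAR={far:.2f}, FRR={frr:.2f}")
```

Raising the threshold cuts false accepts (fraud) but starts rejecting genuine users, and vice versa; AI improvements effectively push both error curves down so that a single threshold can satisfy both goals at once.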
AI & Deep Tech in India: Capital, Innovation & Ecosystem Growth | India AI Impact Summit 2026
The opening session of the India AI Impact Summit 2026 featured prominent investors, legal experts, and association leaders sharing insights on the rapid growth and current trajectory of India's deep tech ecosystem. Representatives from Gaja Capital, 314 Capital, Nishith Desai Associates, Chiratae Ventures, and the Indian Deep Tech Investors Association (IDTA) highlighted major recent investments, discussed capital formation challenges, and detailed how new government initiatives—especially the 1 lakh crore (~$11 billion) RDIF program—are catalyzing public-private partnerships to close market gaps and accelerate commercialization of deep tech research. The panelists emphasized India's recent surge in late-stage and AI investments, the pivotal role of academic institutions as innovation engines, and the growing maturation of the domestic VC ecosystem. However, they stressed that India still lags China and the US in total deep tech investment, and significant effort will be required to mobilize sufficient domestic and global capital, create success stories, and sustain a long-term growth cycle.
- Deep tech investments in India since 2015 have reached over 1,200 companies and $28 billion, with $12 billion focused on AI.
- 2025 alone saw $5 billion invested in deep tech and $2.5 billion in AI.
- About 70% of these investments were in growth and late-stage companies.
- The Indian government launched the RDIF (Research and Development Investment Fund), a 1 lakh crore (approx. $11 billion) initiative to catalyze deep tech capital formation, with a policy requiring 50% private co-investment.
- Public sector programs such as the National Semiconductor Mission, Anusandhan Foundation, and the quantum mission aim to address market failures by providing both profit-seeking and grant-based funding.
- The Indian deep tech ecosystem is maturing, with increased LP (limited partner) appetite for longer-gestation, high-risk investments, improved commercialization pathways from research to market, and stronger startup and VC support structures.
- Annual deep tech investment (~$2 billion) is still well below China ($80–100 billion) and the US ($150 billion), highlighting a significant growth gap.
- Mobilizing $22 billion in private capital over the next cycle is critical, requiring new fundraising structures, more success stories, and greater participation by public sector entities.
- Academic institutions are seen as key innovation engines, and their stronger integration with investors and government is a strategic focus.
- India lacks global poster-child successes in deep tech innovation, making it harder to attract international capital, but current initiatives are viewed as crucial catalysts.
Trustworthy AI Investment as Governance
The session at the India AI Impact Summit 2026 addressed the intersection of AI governance and investment, emphasizing that capital flows are a decisive factor in shaping the direction, beneficiaries, and safety of AI development. Moderated by Fahala, CEO of SIMA, and Sophie, the discussion moved beyond traditional visions of governance—such as ethical guidelines and regulatory frameworks—to examine how investment patterns set the real boundaries and priorities for trustworthy and inclusive AI. Panelists included leaders from venture capital, government, and standards bodies, who discussed how most current AI investment is concentrated among a handful of frontier model developers, with insufficient funds going toward safety tools, open-source initiatives, standards, and capacity-building in emerging economies. The lack of alignment between governance aspirations and capital allocation was highlighted as a gap that risks deepening global inequities and undermining the growth of responsible AI. Notably, Singapore's strategic approach to public investment in startups and community-building was presented as a model for fostering broad-based innovation and governance capacity, with further calls to develop international alliances and standards that democratize AI benefits.
- Session focused on bridging the disconnect between AI governance (often seen as boring) and investment (seen as exciting).
- Highlighted that investment, not just regulation, determines which AI systems are developed, who benefits, and how issues like safety and inclusion are prioritized.
- Panel included leaders from Mozilla Ventures, IMDA Singapore, e-standards, SCAI, and UNESCO.
- AI investment has exploded over the past six years, now reaching 6-, 7-, and even 8-digit sums—but the lion's share goes to a handful of frontier model developers.
- Panelists pointed out only a minor fraction of AI investment funds go toward trust, safety, open source tools, or infrastructure that fosters inclusive innovation.
- Competitive pressures, especially geopolitics and commercial rivalry (e.g., US-China, rush for sovereign AI), shape current capital flows—often at the expense of safety and inclusive access.
- Standards and public utility models were presented as essential tools to allow for greater ecosystem participation and capacity building.
- Singapore has taken proactive steps: directing public funds, grants, tax benefits, and creating physical innovation spaces (like Laurong AI) to support startups in diverse sectors.
- Global asymmetries persist, with the majority (Global North and major commercial actors) controlling most capital, while the 'global majority' remains under-resourced.
- Calls for new investment data, international alliances, and public-private ventures to bridge funding gaps in safety, foundational research, and innovation for emerging economies.
AI, Labor & Inclusive Growth: Policy Pathways for the Global South | India AI Impact Summit 2026
This session at the India AI Impact Summit 2026 explored global and Indian experiences in leveraging AI and digital infrastructure to improve social welfare, particularly focusing on cash transfer programs, digital public infrastructure, and job creation in the age of AI. Panelists shared transformational case studies from Togo's rapid rollout of digital cash transfers using AI mapping and biometrics, the importance of trust and a human-centered approach in India's digital services for rural and marginalized communities, and evidence from GiveDirectly on unconditional cash transfers' positive impact. The session also presented findings from a major report on India's AI-driven job market, emphasizing both automation risks and substantial opportunities for new roles. The discussion highlighted the need for adaptive, equitable policy design—one that uses AI ethically to expand opportunity, enhance resilience, and ensure that economic gains reach the most vulnerable populations.
- Togo rapidly designed and deployed a fully digital cash transfer program using biometric voter IDs and AI, reaching 920,000 people—25% of its adult population—during the pandemic.
- Machine learning algorithms prioritized aid recipients by mapping poverty via satellite imagery and analyzing telecom data, increasing aid effectiveness by 15%.
- A new social protection information system is being established in Togo, integrating biometrics, dynamic social registries, and digital payments, with funding from the World Bank.
- India’s Digital Empowerment Foundation operates in 3,000 locations, emphasizing the importance of building trust, enabling two-way economic flows, and designing frugal, inclusive digital public infrastructure.
- Key lessons for digital policy: Trust-building (human intermediaries), two-way engagement with rural populations, and bottom-up design are critical for equitable impact.
- GiveDirectly has delivered ~$1 billion (nearly ₹10,000 crore) in cash transfers globally, advocating unconditional transfers as an effective, dignified, and transformative poverty-alleviation tool, particularly relevant as AI may increase wealth concentration.
- Lump-sum digital cash transfers empower individuals to adapt to economic shocks, start new businesses, and increase resilience, but must be contextually and carefully designed.
- India's UPI and payment infrastructure create an opportunity for dignified, efficient last-mile delivery of aid.
- A BCG-NITI Aayog report finds that while AI will disrupt jobs (notably junior tech roles), it is also projected to create significantly more jobs—up to 170 million globally compared to 60-70 million displaced (World Economic Forum data).
- Three key categories of new AI jobs are emerging: enterprise AI roles (AI ops, product managers, prompt engineers), frontier AI roles (AI-cybersecurity, AI-quantum technology), and foundational AI research roles.
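The targeting mechanism described in the Togo bullets, scoring households by poverty proxies derived from satellite imagery and telecom data and then allocating a fixed budget to the highest-scoring recipients, can be sketched as a toy ranking. The feature names, weights, and budget units below are entirely hypothetical; this is not the actual Novissi model.

```python
# Toy sketch of ML-based aid targeting: rank households by a predicted
# poverty score and select recipients until a budget cap is reached.
# Features and weights are illustrative, not the actual Novissi pipeline.

def poverty_score(features):
    """Combine hypothetical satellite- and telecom-derived features
    into a single poverty score in [0, 1] (higher = poorer)."""
    # Low nighttime luminosity, low mobile top-ups, and fewer metal
    # roofs are poverty proxies commonly used in this literature.
    w_lum, w_topup, w_roof = 0.4, 0.4, 0.2
    return (w_lum * (1 - features["night_luminosity"])
            + w_topup * (1 - features["avg_topup"])
            + w_roof * (1 - features["metal_roof_share"]))

def select_recipients(households, budget):
    """Pick the poorest-scoring households first until funds run out."""
    ranked = sorted(households, key=lambda h: poverty_score(h["features"]),
                    reverse=True)
    selected, spent = [], 0
    for h in ranked:
        if spent + h["transfer"] > budget:
            break
        selected.append(h["id"])
        spent += h["transfer"]
    return selected

households = [
    {"id": "A", "transfer": 10,
     "features": {"night_luminosity": 0.9, "avg_topup": 0.8, "metal_roof_share": 0.9}},
    {"id": "B", "transfer": 10,
     "features": {"night_luminosity": 0.1, "avg_topup": 0.2, "metal_roof_share": 0.3}},
    {"id": "C", "transfer": 10,
     "features": {"night_luminosity": 0.5, "avg_topup": 0.4, "metal_roof_share": 0.5}},
]
print(select_recipients(households, budget=20))  # the two poorest: ['B', 'C']
```

The budget-capped greedy selection mirrors the session's point that the value of the model lies in prioritization under a fixed aid envelope, not in the prediction itself.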
Beyond GPUs: The Future of AI Compute
This panel discussion at the India AI Impact Summit 2026 focused on expanding the discourse on AI compute beyond the dominant narrative around GPUs, emphasizing the importance of a holistic, system-wide approach to AI hardware and infrastructure development. Panelists from leading organizations—including ITIC, Hewlett Packard Enterprise, Marvell Technologies, Google, Qualcomm, and Ciena—explored topics such as trust in global AI supply chains, the significance of AI-native architecture, the critical roles of networking and memory, the growing impact of edge AI in the Global South, and the relevance of software-driven automation in AI compute. The session highlighted major new investments (notably Google’s $15B AI hub in Visakhapatnam), advances in on-device AI, and innovative use cases for hybrid and decentralized AI architectures. The conversation underscored the crucial need for international collaboration, robust infrastructure planning, and the development of devices and networks to democratize AI's benefits, particularly for emerging markets like India.
- Deliberate shift of focus from GPU-centric AI to broader AI-native system infrastructure—including networks, memory, data security, and modular computing.
- Emphasis on global trust as essential for AI hardware and data ecosystems, requiring cross-border cooperation between governments and the private sector.
- Industry trend away from retrofitting traditional IT infrastructure toward purpose-built, AI-native systems enabling scalable, modular, and efficient deployment.
- Networking and memory are cited as the current bottlenecks to compute efficiency, with advances such as optical interconnects improving internal data center speeds.
- Google announced a $15B investment in India's first AI hub in Visakhapatnam, with a gigawatt data center, subsea cable connectivity, and renewable energy sourcing.
- Introduction of Android AICore, giving local device users access to Gemini Nano (on-device LLM) for privacy-by-design AI services that do not rely on the cloud.
- Qualcomm highlighted real-world edge AI applications, including Roxa Health's on-device healthcare analytics app for rural clinics, demonstrating hybrid AI's value in markets with intermittent connectivity.
- The session reinforced the role of software, automation, and network optimization in realizing the full potential of AI compute.
- International collaboration and infrastructure leadership are repeatedly positioned as opportunities for India to shape the next phase of global AI.
Immersive AI Training: Personalised Learning at Scale
This session of the India AI Impact Summit 2026 showcased a panel of distinguished leaders from healthcare, finance, and edtech, centered on the transformative potential of AI-powered immersive learning for inclusive human capital development in India. Thomas Kaidi of ATEN India and USA recounted the evolution and scaling of immersive AI and serious games over 15+ years, highlighting empirical evidence that simulation-based and experiential learning dramatically outperforms traditional pedagogies. Ajit Sundar corroborated this with pioneering deployment stories in JP Morgan and Wells Fargo, demonstrating both early skepticism and later large-scale successes as training times and costs plummeted while efficacy soared. The session underlined how 'agentic AI' and AI-driven virtual mentors can personalize feedback and assessment for thousands simultaneously, offering language flexibility and individualized learning paths. The panel underscored India's urgent need for such scalable reskilling to compete globally, especially in sectors like semiconductors and advanced manufacturing. Practical impact was backed by robust metrics, arguing for AI-powered learning as a catalyst for productivity, retention, and equitable talent development.
- ATEN recognized in 2025 as top AI virtual learning tools provider, specializing in serious games for education.
- Immersive AI and simulation-based learning increases retention rates up to 90%, compared to 10% for reading and 30% for video.
- Agentic AI-powered platforms have been deployed for global organizations since 2009, including JP Morgan and Wells Fargo.
- A 16-year AI learning journey reduced training programs from 12 weeks to 5 weeks, a 58% reduction in duration.
- Reported benefits for enterprises: 30% productivity improvement, 21% reduced handling time, 50% increase in training throughput, and 81% decrease in attrition.
- AI-driven virtual mentors provide adaptive, language-inclusive, and scalable upskilling for thousands in parallel.
- AI immersive learning platforms enable predictive analytics, tailored assessments, and track workforce strengths and weaknesses from onboarding to exit.
- India’s transition to AI-powered upskilling is essential for competitiveness in critical sectors like semiconductors and robotics.
Accessible, Affordable, Accountable AI for Healthcare
The session at the India AI Impact Summit 2026 delved deeply into the evolving adoption and impact of AI-driven predictive tools in healthcare, their regulatory challenges, and practical implementation, particularly in India and other emerging markets. Panelists explored contrasting regulatory models from the US, EU, UK, and India, highlighted the balance between AI’s clinical utility and ethical risks such as false positives and patient anxiety, and discussed how the normalization of predictive AI in everyday healthcare can improve early intervention. From a clinician’s perspective, AI’s potential is both underestimated (given its reach and triage abilities at scale) and overestimated (with current prediction models still prone to significant uncertainty and risk of unnecessary procedures). From an investment standpoint, the panel cited a notable surge in AI-driven healthcare funding, emphasizing India's unique opportunity to lead with low-cost, domain-specific small language models and devices that can be globally competitive, especially in the Global South. Technological advancements over the last five years, especially in data interoperability and generative AI, are rapidly transforming not only diagnostics but the broader healthcare ecosystem, despite ongoing implementation and scalability challenges.
- Three main regulatory models (EU, UK, US/India) are shaping the approval and oversight of predictive AI healthcare tools.
- Widespread adoption of advanced predictive AI capabilities is anticipated to become the standard of practice within years.
- Clinicians experience both overestimation (AI’s purported ability to predict disease years ahead, but with many false positives) and underestimation (AI’s transformative population-level triage potential) regarding AI impact.
- India faces unique breast cancer screening challenges, with AI offering scalable triage solutions where radiologists are scarce.
- Establishing clear guardrails and uncertainty categories (e.g., thresholds for actionable probabilities) is critical to safe AI deployment.
- Major investment momentum in health tech: US healthcare investment rose from $10B (2023/24) to $15B (2025), with 60% related to AI.
- Indian startups are focusing on SLMs (Small Language Models) and affordable AI-powered diagnostic devices aimed at emerging markets, in contrast to the West's reliance on large LLMs and expensive infrastructure.
- Opportunities exist for India to lead in point-of-care AI devices targeted at the Global South, potentially capturing large market share due to lower capex requirements.
- Recent AI advancements have shifted healthcare analytics from reactive, siloed, retrospective models to proactive, interoperable, and generative approaches over the past five years.
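The "guardrails and uncertainty categories" point above can be made concrete as a simple thresholding rule that maps a model's predicted probability onto an action tier. The tier names and cut-offs below are hypothetical illustrations, not clinical guidance from the session.

```python
# Illustrative mapping from a model's predicted disease probability to
# an action tier. Thresholds are hypothetical placeholders, not
# clinical guidance.

THRESHOLDS = [
    (0.85, "urgent_referral"),    # high confidence: escalate to a specialist
    (0.50, "confirmatory_test"),  # actionable but uncertain: order follow-up
    (0.20, "routine_monitoring"), # low probability: recheck at next screening
]

def triage_action(probability):
    """Return the action tier for a predicted disease probability."""
    for cutoff, action in THRESHOLDS:
        if probability >= cutoff:
            return action
    return "no_action"

for p in (0.92, 0.60, 0.30, 0.05):
    print(p, "->", triage_action(p))
```

Separating "actionable" from "high-confidence" tiers is one way to limit the unnecessary procedures the clinicians on the panel warned about: a mid-range probability triggers a cheap confirmatory test rather than an invasive intervention.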
Open-Source Tools for Safe and Secure AI
The panel at the India AI Impact Summit 2026 focused on the role of open-source tools in enhancing trust, safety, and inclusivity in artificial intelligence. Speakers representing diverse backgrounds—including major organizations, model developers, academia, and national policymakers—emphasized that open-source tools are critical for democratizing AI trust, especially by enabling transparent evaluation and monitoring of AI systems. Key initiatives discussed included the OECD's catalog of over 900 tools for trustworthy AI, the UK's Inspect platform for self-certifying AI safety, and Mozilla/Roost's open-source libraries for trust and safety protections. The panel highlighted that open source empowers smaller players and underrepresented regions, offers solutions aligned with engineering realities, facilitates multistakeholder participation, and can counterbalance the centralization of AI capabilities by large, mostly American companies. However, significant gaps remain in open-source tooling—especially in auditing pre-training data and achieving sustainable open-source development given fast-paced advances and commercial interests. The panel called for continued collaboration, targeted submissions to relevant catalogs, and the engagement of local communities for sustainable, trusted AI ecosystems.
- OECD has built an online catalog of over 900 tools and metrics for trustworthy AI, accessible at oecd.ai/catalog.
- A new targeted call for open-source tool submissions has been launched, aimed at enriching the OECD catalog and informing India's 'Trusted AI' working group deliverables.
- The UK's AI Safety Institute has developed 'Inspect,' an open-source platform for self-certifying AI outputs, exemplifying how tools (rather than regulatory documents) can engage engineers in safety practices.
- Mozilla Foundation and Roost have released open-source libraries (e.g., 'Any Guardrail' and 'Osprey') to simplify trust and safety deployments for developers, including small startups that cannot afford proprietary solutions.
- Roost's tooling enables content moderation and enforcement at scale: Bluesky uses it for 45 million events/day and 100,000 daily content takedowns.
- Open-source models and tools help nations and citizens maintain autonomy and oversight, mitigating over-reliance on a few large, mostly American, tech companies.
- Model builders like Mistral AI assert that open(ish) models are vital for transparency and societal benefit, though there's persistent tension between openness and commercial competitiveness.
- Academia and international working groups emphasize that while all models may not be open, benchmarking and evaluation tools must be, to ensure broad, unbiased assessment and trust.
- Panelists stressed the vital role of community-building and local engagement in sustaining open-source ecosystems and ensuring the uptake of trustworthy AI tools.
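Guardrail libraries of the kind described above typically layer cheap deterministic checks in front of more expensive model-based ones. A minimal sketch of that layering follows, with hypothetical rules that do not reflect the actual Any Guardrail or Osprey APIs.

```python
# Generic sketch of a layered trust-and-safety check: run the cheapest
# rules first and return early on a violation. Rules and categories are
# hypothetical, not taken from any specific open-source library.

BLOCKLIST = {"credit card dump", "sell firearms"}
MAX_LENGTH = 10_000

def run_guardrails(text):
    """Return (allowed, reason) after running checks in cost order."""
    lowered = text.lower()
    for phrase in BLOCKLIST:            # cheapest check: exact phrases
        if phrase in lowered:
            return False, f"blocklist:{phrase}"
    if len(text) > MAX_LENGTH:          # structural check
        return False, "too_long"
    # A real pipeline would add classifier- or LLM-based checks here.
    return True, "ok"

print(run_guardrails("How do I bake bread?"))
print(run_guardrails("Where to buy a credit card dump?"))
```

Returning a machine-readable reason string, rather than a bare boolean, is what makes enforcement auditable at the event volumes mentioned above.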
Implementing AI in Public Healthcare: Data, Ethics & System Readiness | India AI Impact Summit 2026
The session at the India AI Impact Summit 2026 focused on moving beyond generalities to examine how AI is truly being implemented to drive healthcare innovation at the last mile in India. Key stakeholders from diverse organizations shared practical insights and challenges gained from their grassroots implementation of AI-enabled solutions for maternal and child health, risk prediction, and frontline health worker support. The panel highlighted persistent gaps in data representation (notably the mismatch of imported AI models to local populations), infrastructural and human resource limitations in low-resource settings, and the critical disconnect between predictive AI output and actionable health interventions. Detailed field experiences were shared, such as the rollout of AI-powered learning management and chatbot support for auxiliary nurse midwives (ANMs) under the Arman initiative, and WhatsApp-based LLM chatbots for ASHA workers by Khushi Baby. These deployments have already impacted tens of thousands of healthcare workers across several Indian states, with early evidence of high user satisfaction and the need for “tech plus touch” (AI systems backed by live human support). The session underscored the importance of contextual adaptation, ongoing monitoring, and collaboration between implementers, policymakers, and healthcare workers to ensure sustainable AI impact. The discussion set a forward-looking agenda to prioritize real-world health outcomes, transparent evaluation, and iterative improvement of AI tools, explicitly recognizing both limitations and the transformational potential of technology when paired with system-level readiness and human-centered design.
- Over half of clinical AI models currently use training data from the US or China, resulting in a 23% higher false negative risk for pneumonia and greater diagnostic errors in Indian populations, especially for conditions like melanoma in darker-skinned patients.
- The Arman initiative has developed a comprehensive, AI-assisted high-risk pregnancy management protocol for 35 conditions, now deployed across 7 states, training 14,000 ANMs through in-person and digital means.
- Arman's AI solution features a large language model (LLM)-based multilingual chatbot for ANMs, now serving 6,000 users and soon scaling to 12,000 by March; it offers tailored, instant responses to frontline queries, processed in local languages and formats (text/voice), with 98% user satisfaction and 97% verified numerical accuracy.
- Crucially, Arman systems retain a human-in-the-loop approach to address cases the chatbot cannot resolve, ensuring reliability and user trust.
- Khushi Baby has implemented WhatsApp-based LLM chatbots for ASHA Saheli workers, leveraging years of direct collaboration and needs assessment in Rajasthan to strengthen last-mile healthcare delivery.
- Key infrastructural challenges remain, including scarce computational resources, unreliable connectivity, and varied digital literacy among users, prompting emphasis on app-agnostic, offline-capable, and context-adapted solutions.
- The session emphasized that predictive AI models alone are insufficient without concurrent improvements in supply chains, follow-up infrastructure, trained staff, and actionable pathways to translate diagnosis into real health outcomes.
- India is making deliberate steps to build localized AI training datasets, state-specific health protocols, and scalable digital health worker support systems, pointing to a maturing and context-aware AI health ecosystem.
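The human-in-the-loop pattern described for the Arman chatbot (answer directly when confident, escalate everything else to a live supporter) can be sketched as a confidence-gated router. The `model_answer` stub, confidence values, and protocol references below are hypothetical placeholders, not Arman's actual system.

```python
# Minimal sketch of a human-in-the-loop chatbot loop: the model answers
# only when confident; everything else is queued for a human expert.
# All answers and confidence values here are illustrative stand-ins.

ESCALATION_QUEUE = []

def model_answer(query):
    """Stand-in for an LLM call returning (answer, confidence)."""
    known = {
        "iron dosage": ("Refer to protocol table 3.", 0.95),
        "bp threshold": ("140/90 mmHg per protocol.", 0.90),
    }
    return known.get(query, ("", 0.0))

def respond(query, min_confidence=0.8):
    """Answer directly if confident, otherwise escalate to a human."""
    answer, confidence = model_answer(query)
    if confidence >= min_confidence:
        return answer
    ESCALATION_QUEUE.append(query)  # routed to a live supervisor
    return "Your question has been forwarded to a supervisor."

print(respond("bp threshold"))             # answered by the model
print(respond("unusual swelling"))         # escalated to a human
```

The escalation queue is the "touch" half of the "tech plus touch" model: unresolved queries keep a guaranteed human path instead of a low-confidence guess.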
Education for Social Empowerment in the AI Age
This session from the India AI Impact Summit 2026 focused on the current landscape, innovations, and challenges in applying artificial intelligence (AI) within India's educational sector. Panelists emphasized the urgent need for rapid, outcome-based evaluation methods for AI-driven educational programs, moving away from slow, traditional evaluations toward faster, learning-focused assessments. Notable successes of AI-enabled interventions—such as substantial improvements in numeracy and literacy using adaptive tools—were highlighted, alongside specific case studies from states like Haryana and Telangana. However, panelists cautioned about the risks of unsupervised deployment of AI, advocating for careful piloting, teacher-mediated implementation, and robust measurement systems to ensure efficacy and safety. The need for an AI 'diffusion layer'—addressing infrastructure, teacher support, adaptation for regional needs, and incentive structures—was outlined as crucial for AI's responsible scale and impact in education. Policies such as embedding AI and coding into school curricula from grade one and involving higher education students as support for teachers underscore India's comprehensive approach to preparing students and educators for the AI era.
- Shift from traditional, slow randomized controlled trials to rapid, learning-outcome-based program evaluations, with pilots now completed in under 3 months.
- Case study from Haryana: AI-enabled math program dramatically increased grade-appropriate learning outcomes—78% in math versus 36% in non-tech-driven Hindi instruction.
- Telangana state's groundbreaking policy: AI/coding curricula embedded from grade 1 to 9, taught by regular subject teachers, supported by undergraduate and postgraduate students.
- Introduction of adaptive voice-based AI tools (like AXL and AML) for foundational literacy/numeracy, with eight-month data showing a 42% improvement among struggling students.
- Emphasis on careful AI deployment: positioning AI as a teacher aid, not a replacement, with real-time classroom data collected and controlled by teachers.
- Concerns raised about potential misuse and hallucinated or false outputs from AI; calls for rigorous pre-deployment testing akin to pharmaceuticals.
- Highlighting the importance of incentive structures—including outcome-based funding and teacher/community engagement—to drive adoption and effectiveness.
- Announcement of an upcoming accelerator focused on rapid, rigorous evaluation of AI education interventions in India.
- Call for the development of an AI diffusion layer—adapting infrastructure and solutions (e.g., voice-based over text, data privacy, regional customization) to facilitate effective scaling.
Global Dialogue on AI and Labour Market Resilience
The session at the India AI Impact Summit 2026 addressed the significant and complex implications of artificial intelligence on labor markets globally, with a keynote from Professor Yoshua Bengio, a noted AI scientist. Bengio highlighted widespread concerns over AI's potential to disrupt employment, citing surveys and expert opinions that predict large-scale job losses, especially in white-collar sectors. He stressed the urgency of proactive, precautionary policy responses and international cooperation, emphasizing that 60% of jobs in advanced economies and 40% in emerging economies are at risk due to general-purpose AI. Bengio also spotlighted the unequal distribution of benefits, with advanced AI concentrated in a few countries, thus risking increased global inequity. Following the keynote, economist Bharat Chander presented data showing a 16% relative decline in employment for young, AI-exposed workers in the US since late 2024, with firm-level adoption data revealing India's leading position in AI uptake. Chander underscored the importance of improved data collection to understand the impacts of both AI exposure and adoption, citing discrepancies in employment outcomes and calling for more granular, real-time information. The session transitioned to a diverse panel, including Indian policymakers, tech company leaders, and global experts, focused on how India can leverage high AI adoption rates to both augment work and manage the transition for those adversely affected.
- Professor Yoshua Bengio, Turing Award winner and chair of the International AI Safety Report, outlined major risks and uncertainties regarding AI-driven labor market displacement.
- Surveys indicate over 70% of US workers and over a quarter of UK workers fear AI-driven job losses; some projections forecast up to 50% of entry-level white-collar jobs could be automated by 2027.
- The International AI Safety Report synthesizes work from over 100 experts and is backed by 29 countries, the EU, OECD, and the UN; it estimates 60% of jobs in advanced economies and 40% in emerging economies are exposed to AI risks.
- Declining employment for young workers in AI-exposed jobs has already been observed—a 16% relative decline in the US from late 2024 to 2025, with no reversal.
- Firm-level data shows rapid AI adoption in countries like Israel, Japan, and India, with strong uptake especially in the information sector.
- Disparity exists between AI exposure (which predicts job loss) and AI adoption (which shows mixed or null effects), highlighting measurement limitations.
- Approximately half of workplace AI use may occur via personal accounts, making it difficult to fully quantify enterprise adoption and impacts.
- Advanced AI and related wealth are concentrated largely in two countries, raising concerns of both domestic and global inequality and fiscal strain for less advanced nations.
- Bengio and panelists advocate for international coordination and alliances to address risks, with a specific call for multilateral agreements like the recent Canada-Germany AI ethics pact.
- India is identified as a global leader in AI adoption and now faces the dual challenge of job augmentation and transition planning for negatively affected workers.
Safe and Trusted AI Standards in the Age of Generative AI
The panel session at the India AI Impact Summit 2026 centered on the critical need for safe and trusted AI standardization given the widespread adoption of Large Language Models (LLMs), generative AI, and agentic AI. Experts from standardization bodies, industry, and academia discussed the evolving definitions and dimensions of 'trusted AI', highlighted the rapid progress of international and national standard-setting (e.g., ISO/IEC JTC1 SC42 and India’s BIS), and addressed challenges in applying, certifying, and adapting standards amid fast technological change. Key points included the necessity for sector-specific AI guidance, unique certification challenges under new standards like ISO/IEC 42001, emerging generative and agentic AI risks such as deep fakes and automatic action-taking, and the multi-stakeholder responsibility necessary for responsible AI. The session emphasized that while foundational frameworks and standards exist, there remains a dynamic, ongoing need to refine and expand standards to ensure AI systems remain reliable, transparent, and aligned with human values and societal needs.
- The discussion focused on 'safe and trusted AI standardization' for the age of LLMs, generative AI, and agentic AI, reflecting the importance of responsible AI adoption.
- Trusted AI is multi-dimensional, varying by domain and stakeholder, with current standards (e.g., ISO/IEC JTC1 SC42) providing high-level principles but lacking deep sector-specific guidance.
- International standards bodies (ISO/IEC JTC1 SC42) and India's Bureau of Indian Standards (BIS) have developed foundational and certifiable AI standards, such as ISO/IEC 42001, creating the first certifiable management system for AI.
- Certification of AI systems faces unique challenges due to the rapid technological advancements and requires organizations to adapt continuously within structured frameworks.
- Emergence of generative and agentic AI introduces new risks like deepfakes, hallucinations, IP infringement, and autonomous actions, raising stakes for accountability, transparency, and risk management.
- Responsible AI practices and standards are evolving, with calls for collaborative, multi-stakeholder approaches incorporating technical, regulatory, and ethical dimensions.
- The Indian government's recent release of the India AI Governance Framework integrates standards as a core component for ensuring AI governance, transparency, and accountability.
Launch of India AI Impact Summit 2026 Compendiums | Documenting AI for People, Planet & Progress
The session featured presentations from three innovative startups addressing major challenges in accessibility and healthcare through AI-driven assistive technologies. 'Pas Speak' showcased a portable AI device translating impaired speech into clear communication for those with speech disorders, leveraging India's largest dataset of Hindi slurred speech and aiming for clinical trials by late 2026. 'Wave' introduced an AI-enabled glove designed to teach and practice Braille independently, addressing the shortage of qualified teachers and incorporating multilingual support with international expansion plans. The third presentation, 'Arogya,' introduced an AI system built atop India's robust digital health infrastructure to assist rural doctors by reasoning over patient records, addressing the critical shortage and workload of healthcare providers in non-urban areas. These projects exemplify India's growing AI ecosystem fostering affordable, scalable solutions for underserved populations, while highlighting ongoing needs for clinical partnerships, funding, and regulatory approval.
- Pas Speak unveiled 'PASE', a pocket-sized, real-time AI speech clarifier device for people with speech disorders (including dysarthria and presbyphonia), with a targeted market release in early 2027.
- Pas Speak's framework is patent-pending, has been recognized in national/international forums, and runs on a database built from India's largest Hindi dysarthric speech collection.
- Revenue model for Pas Speak includes Rs. 2,000 per device and Rs. 200 monthly subscription; over 100 potential users on the waitlist and letters of interest from institutions/neurologists.
- Pas Speak currently at Technology Readiness Level 7, preparing for clinical trials and regulatory approval in 2026.
- Wave is the first AI-based wearable glove solution for Braille learning and practice, with six flex sensors mimicking traditional Braille cells.
- Wave's prototype supports multiple Indian languages (Hindi, Tamil, Malayalam), employs haptic and voice feedback, and has secured Indian patent filings and international collaboration (Japanese Braille).
- Wave already piloting in multiple institutes (Delhi, Kolkata), drawing strong instructor and institutional interest.
- Arogya is developing an AI-powered assistant for rural clinics, capable of natural language capture and reasoning over vast digital health records (leveraging India's ABDM and 67+ crore digital records).
- Arogya aims to boost rural healthcare delivery by compensating for lack of support staff and limited doctor-patient time.
- All three startups explicitly seek partnerships, clinical validation, resources, and funding to expand reach and impact.
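Wave's six-flex-sensor design maps naturally onto the six dots of a standard Braille cell. A minimal sketch follows, assuming a hypothetical sensor-to-dot assignment and activation threshold; the Unicode dot-to-bit mapping itself (U+2800 plus one bit per dot) is standard.

```python
# Sketch of decoding six flex-sensor readings into a Braille character.
# The sensor ordering and 0.5 activation threshold are hypothetical;
# Unicode Braille Patterns assign dot i (1-6) to bit i-1 above U+2800.

def sensors_to_braille(readings, threshold=0.5):
    """readings: six values in [0, 1], one per Braille dot (dots 1-6).
    Returns the corresponding Unicode Braille character."""
    if len(readings) != 6:
        raise ValueError("expected exactly six sensor readings")
    bits = 0
    for dot, value in enumerate(readings):  # dot index i sets bit i
        if value >= threshold:
            bits |= 1 << dot
    return chr(0x2800 + bits)

# Dots 1 and 2 pressed -> U+2803, the Braille letter 'b'
print(sensors_to_braille([0.9, 0.8, 0.1, 0.0, 0.2, 0.0]))
```

Because the same six-bit cell underlies Bharati Braille and Japanese Braille alike, multilingual support reduces to swapping the table that interprets the decoded cell, which is consistent with the language expansion described above.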
Gender Empowerment AI and Education AI_Compendium
The transcript details a session at the India AI Impact Summit 2026, showcasing innovative tech solutions by finalists addressing critical societal issues through AI-driven assistive technologies. The session opens with remarks celebrating the diversity and impact of over 2500 applicants, emphasizing inclusivity and commitment to continued engagement with all participants, not just the finalists. Three finalist teams are highlighted: one presents a pocket-sized AI device called Palpat that converts impaired speech to clear speech, targeting over 650 million people globally with speech disorders and backed by early institutional interest; the second, Wave, introduces wearable gloves with sensors and AI-driven feedback to revolutionize braille learning for the visually impaired in India and beyond, already patent-filed and in advanced prototype stages; and the third team, Arugia, proposes an AI medical assistant to streamline rural doctor-patient consultations, aiming to address disparities in access to clinical expertise. Each solution combines cutting-edge AI, hardware innovation, and localization to tackle deeply entrenched challenges in accessibility, healthcare, and inclusion, with clear pathways to scalable impact.
- Over 2500 applications were received for the AI Impact initiative, with an emphasis on supporting all participants beyond the final 20 teams.
- Finalist team 'Palpat' showcased a pocket-sized device leveraging cloud-based AI to convert impaired speech into clear speech, targeting disorders like disarthria and presbyphonia, with 100+ pre-registrations and plans for 2027 market launch at ₹2,000/device plus ₹200/month subscription.
- Palpat’s IP status includes patent-pending technology and partnerships initiated with care centers and neurologists, seeking further clinical collaboration and funding.
- Team 'Wave' introduced wearable AI-powered gloves for braille learning, has filed for design and utility patents, and is already piloting at institutes in India (including in Kolkata) and with Japanese rail, a first in the global Bharati Braille segment.
- Wave's innovation features multi-language support (Hindi, Tamil, Malayalam), haptic and audio feedback, cloud-based progress monitoring, and a web interface for learners and instructors.
- Wave has attracted interest from educational institutions and started receiving orders; expansion plans include the Middle East and Southeast Asia.
- Team 'Arugia' is developing an AI solution to augment rural doctor consultations, reducing administrative workload and enhancing face-to-face patient interaction amid a shortage of clinical expertise outside Indian metros.
- The session enforced strict presentation timings (5 minutes/team), organized teams into themed panels (e.g., healthcare, edtech, road safety), and demonstrated robust support for technical issues during live demos.
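Wave's gloves pair braille cells with haptic feedback. As a minimal illustrative sketch (not Wave's actual design, whose internals were not disclosed), standard 6-dot braille cells can be represented as sets of raised-dot positions that a glove controller would translate into per-actuator on/off pulses:

```python
# Sketch only: standard 6-dot braille cells for letters a-j, stored as sets of
# raised-dot positions (dots numbered 1..6). A glove controller could map each
# cell to a 6-element haptic actuator pattern. Letter-to-dot mappings follow
# standard braille; the actuator interface is hypothetical.

BRAILLE_DOTS = {
    "a": {1}, "b": {1, 2}, "c": {1, 4}, "d": {1, 4, 5}, "e": {1, 5},
    "f": {1, 2, 4}, "g": {1, 2, 4, 5}, "h": {1, 2, 5}, "i": {2, 4}, "j": {2, 4, 5},
}

def to_actuator_pattern(letter: str) -> list[bool]:
    """Return a 6-element on/off pattern for haptic actuators (dots 1..6)."""
    dots = BRAILLE_DOTS[letter.lower()]
    return [d in dots for d in range(1, 7)]

print(to_actuator_pattern("c"))  # dots 1 and 4 raised
```

Extending this to Bharati Braille would add cells for Indic scripts; the multi-language support the team described (Hindi, Tamil, Malayalam) implies per-script lookup tables of the same shape.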
HMEIT Day 2 Press Conference | IndiaAI Impact Summit 2026
The session at the India AI Impact Summit 2026 offered in-depth insights into India's ambitious AI strategy, focusing on sovereign AI development, democratization of AI access, and global leadership in responsible AI deployment. Ministerial remarks emphasized the success and growth of sovereign AI models, comparing them favorably with global leaders, and highlighted India's unique approach to democratizing compute infrastructure, particularly through a vast common GPU pool and public digital infrastructure akin to UPI. Announcements included the launch of AI Mission 2.0, focused on robust R&D, innovation diffusion, and increased compute resources. The government articulated its balanced regulatory philosophy, eschewing purely legalistic approaches in favor of technolegal frameworks designed both to foster innovation and to mitigate harm. Partnerships with global tech leaders (including Nvidia), major GPU investments, and strong engagement with nations across the Global South were highlighted as India's differentiators. Speakers repeatedly stressed India's aim to provide scalable, multilingual, and frugal AI models and solutions to empower MSMEs, startups, students, and the broader global community. Issues of sustainability, energy usage, and job anxieties were acknowledged, with ongoing initiatives in clean energy and workforce upskilling cited in response. India's nuanced regulatory policy seeks to balance innovation and regulation, and proposals for structured norms for OTT platforms and content attribution are under consideration. The summit underscored India's intention to control its AI destiny, position itself as a trusted AI provider to the world, and avoid the pitfalls of over-concentration seen elsewhere.
- AI Mission 2.0 launched, focusing on R&D, innovation diffusion, and strengthening AI compute infrastructure.
- India’s sovereign AI models have been rigorously benchmarked, outperforming many global models on key parameters.
- India currently has 38,000 GPUs in its common compute layer with an additional 20,000 GPUs to be ordered and deployed within six months.
- Stanford ranks India among the top three global AI nations.
- India’s AI infrastructure is designed for democratized access, differentiating from global models dominated by private conglomerates.
- A UPI-like bouquet of trusted, scalable, multilingual AI solutions is to be created for global and MSME access.
- Strong policy emphasis on a technolegal approach to AI regulation, integrating technical solutions with regulatory frameworks.
- Government engaging with over 30 countries and collaborating with global leaders (e.g., Nvidia) on AI infrastructure and innovation.
- Major investments in clean power for sustainable AI data centers, with startups developing technologies to reduce AI infrastructure power consumption by up to 35%.
- Commitment to upskilling and reskilling the IT workforce in response to potential job disruptions from AI adoption.
- AI deployment across all sectors, with specific successes in healthcare, education, and public infrastructure.
- Balanced regulatory approach modeled after India’s DPDP Act, promoting innovation while mitigating harmful impacts.
- Plans in place for structured norms for OTT platform compliance and AI-generated content attribution.
- Strong engagement and model sharing with Global South countries, aiming for inclusive AI growth and leadership.
- Semiconductor (Semicon 2.0) and 'Create in India' missions launched, emphasizing design and indigenous deep tech startups.
From Models to the Masses: AI for Climate Resilience
The session at the India AI Impact Summit 2026, jointly hosted by the Council on Energy, Environment and Water (CEEW) and Google, focused on AI-driven climate resilience and sustainable food systems in India. Highlights included the introduction of an upcoming Climate Resilience Atlas digital public good, which synthesizes complex, siloed climate data into actionable insights for hyperlocal planning and disaster preparedness across sectors such as agriculture, water, and urban systems. The session showcased advanced AI and geospatial tools for granular risk analysis, enabling real-time, plot-level agricultural monitoring and facilitating direct, timely benefit transfers to farmers. Persistent barriers to crop diversification, especially insufficient incentives and lack of trust, were illuminated, with panelists demonstrating how AI can address verification bottlenecks and fraud mitigation. Experts from academia, government, and tech discussed moving AI climate tools beyond pilots into large-scale, government-integrated systems, with particular focus on disaster management and impact assessment. The key message underscored the opportunity to bridge the 'last mile': making AI-based climate intelligence accessible and actionable for vulnerable communities, local officials, and frontline sectors.
- CEEW and Google will soon launch the Climate Resilience Atlas, an interoperable digital public good providing modular, multi-layered climate and sectoral risk data for 300+ Indian cities.
- The platform features a conversational AI agent capable of natural language queries, delivering insights in tables and charts for decision-making.
- Advanced geospatial AI enables building- and neighborhood-level risk assessments, critical for infrastructure adaptation and precise disaster management.
- A preview and early beta access to the Climate Resilience Atlas product was offered via QR codes to session attendees.
- State-wide review of crop diversification policies highlighted that direct incentive payments are often insufficient and poorly targeted, with trust and verification gaps hampering scaling.
- AI-powered agricultural tools like Google's AMED and AnthroKrishi are enabling real-time crop monitoring, automated payment transfers, and accurate beneficiary identification.
- Panelists called for deeper integration of AI within digital public infrastructure (DPI), moving climate intelligence from pilot projects into scaled government systems.
- Key areas for AI impact identified included disaster management (e.g., flood/landslide models) and impact assessment, with transition from site-specific to large-scale, actionable risk profiling.
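The verification bottleneck the panel described reduces to a simple gating rule: a diversification incentive is paid only once plot-level observations confirm the declared crop for enough consecutive monitoring windows. The sketch below is purely illustrative; the function name, threshold, and data shapes are assumptions, not any deployed system's logic:

```python
# Hypothetical sketch of verification-gated direct benefit transfer: release
# payment only after remote-sensing observations confirm the declared
# (diversified) crop for N consecutive monitoring windows. All names and the
# threshold are illustrative, not from AMED/AnthroKrishi or any state scheme.

def payment_approved(declared_crop: str, observations: list[str],
                     required_consecutive: int = 3) -> bool:
    """Approve the transfer if the declared crop appears in N consecutive windows."""
    streak = 0
    for obs in observations:
        streak = streak + 1 if obs == declared_crop else 0
        if streak >= required_consecutive:
            return True
    return False

# A farmer who declared a switch from paddy to maize, observed over five windows:
print(payment_approved("maize", ["paddy", "maize", "maize", "maize", "paddy"]))  # True
```

The point of the consecutive-window requirement is fraud mitigation: a single favorable observation (or a mislabeled satellite pass) cannot trigger a payout on its own.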
How to Build Secure AI: Essential Development Guidelines
The session provided a comprehensive overview of the UK's approach to AI cybersecurity, emphasizing the role of international collaboration and industry-standard technical frameworks. The speaker described the UK's involvement with various standards bodies (such as ISO, ITU, ETSI, 3GPP, W3C, and IETF) and highlighted the importance of consensus-driven, multistakeholder processes in developing robust and compatible technical standards. The discussion focused on addressing evolving cyber threats specific to AI systems, including adversarial attacks, data poisoning, and model inversion, and underscored gaps in decommissioning and end-of-life security practices for AI. A highlight was the announcement of 13 baseline security principles mapped across five phases of the AI lifecycle, with a mixture of mandatory and voluntary provisions, aimed at safeguarding the full AI supply chain. The UK's proactive stance is to ensure that AI systems, at every stage, meet minimum security standards while still being accessible for a range of organizations. The session also acknowledged the persistent need for adaptation as AI threats evolve and the diverse viewpoints among governments and industry on topics like privacy and internet governance.
- The UK collaborates internationally on AI and cybersecurity standards, including with nations in the Global South.
- Partnerships extend beyond governments to academia, industry, and the public.
- The UK engages with prominent standards bodies: ISO, ITU, ETSI, 3GPP, W3C, and IETF.
- Standards ensure compatibility, interoperability, and baseline security for global digital products and services.
- Unique AI-specific cyber threats highlighted: adversarial inputs, data poisoning, model inversion, and prompt injection.
- Traditional cyber threats still apply to AI systems; new threats require evolving mitigation strategies.
- The UK has developed 13 foundational security principles for AI, covering the entire lifecycle from design to decommissioning.
- Principles include mandatory ('shall') and voluntary ('should') security provisions.
- Emphasizes securing the entire AI supply chain, not just developers and deployers.
- Noted lack of existing guidance for decommissioning AI systems; UK initiative aims to address this gap.
- Stressed need for continued adaptation of standards and advice as the AI threat landscape changes.
- Highlighted ongoing tensions and debates between governments and industry, especially regarding privacy and internet regulation.
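The shall/should structure described above is essentially a lifecycle-phased compliance checklist. A minimal sketch of that data shape follows; the phase names and principle texts are placeholders, not the UK code's actual wording:

```python
# Illustrative only: modelling lifecycle-phased security principles where each
# provision is tagged mandatory ("shall") or advisory ("should"), as the
# session describes. Phase names and principle texts are invented placeholders,
# not quotations from the UK's 13 principles.
from dataclasses import dataclass

@dataclass
class Principle:
    phase: str        # e.g. "design", "development", "deployment",
                      # "maintenance", "end-of-life"
    text: str
    mandatory: bool   # True => "shall", False => "should"

checklist = [
    Principle("design", "Threat-model AI-specific attacks such as data poisoning", True),
    Principle("deployment", "Document model provenance for downstream users", True),
    Principle("end-of-life", "Securely dispose of model weights and training data", False),
]

def outstanding_mandatory(done: set[str]) -> list[str]:
    """Return the 'shall' provisions not yet satisfied."""
    return [p.text for p in checklist if p.mandatory and p.text not in done]

print(len(outstanding_mandatory(set())))  # two mandatory items outstanding
```

Separating mandatory from advisory provisions in the data model lets an audit report fail only on unmet "shall" items while still surfacing "should" gaps, mirroring the accessibility goal the session noted for smaller organizations.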
Transforming Healthcare with AI Innovations
The session at the India AI Impact Summit 2026 focused on the intersection of AI and healthcare, highlighting both grassroots innovation and sweeping policy changes. Sandeep Nailwal, co-founder of Polygon and leader of BFI (Blockchain For Impact), detailed BFI's evolution from urgent pandemic relief to building a robust, long-term medtech and AI innovation ecosystem in India, including direct support for researchers and startups, with a special emphasis on bridging gaps in sales and scaling for non-software health solutions. Karthik, regional adviser for digital health and AI at the World Health Organization, underlined India's mounting push to become a global leader in AI for health, recognizing core challenges such as fragmented data and the need for foundational digital infrastructure. He announced that India would soon launch the first comprehensive AI in health strategy in the Global South, promising regulatory guardrails, safer and more equitable tools, and robust data standards for scale. The approach focuses on building interoperable data systems, supporting indigenous innovation, elevating standards for AI tool deployment, and translating pilot projects to real-world population-level impact with structured policy guidance and institutional review. The panel collectively reinforced a vision for AI-driven healthcare that is safe, scalable, indigenous, and fully integrated with global best standards, pointing to an immediate future marked by policy-driven enablement and real societal impact.
- BFI (Blockchain For Impact), originally formed for COVID relief, now supports long-term healthcare innovation in India across 45-50 researchers and 55+ startups, with 15-16 focusing on AI.
- BFI has created a 'full-stack' support pipeline for medtech and healthcare startups, including help with clinical trials, product development, and market distribution.
- BFI played a crucial role during India's COVID vaccination drive, providing around 16 crore syringes, covering about one-third of all vaccinations at the time.
- Emphasis on indigenous, innovative healthcare products rooted in upstream research and partnering with major institutes and government bodies (ICMR, IITs, PMO, ANRF).
- Supportive policies now aim to bridge the transition from patents to actual on-the-ground products and services.
- WHO adviser Karthik announced the imminent launch of India's first comprehensive national AI in health strategy – the first such sector-specific strategy in the global south.
- The strategy includes strong governance: data interoperability standards (OMOP, FHIR), longitudinal electronic records (leveraging the Ayushman Bharat Digital Mission), and built-in safety, equity, and ethical frameworks for AI health tools.
- Focus on addressing 'pilotitis' by ensuring successful pilots are scaled to real-world impact via governance, evidence generation, and implementation science.
- India to promote 'made in India', at-scale AI healthcare solutions, positioning itself alongside world leaders such as the US, UK, France, Japan, and South Korea in health-tech innovation.
- A call for stakeholders to participate in the strategy launch and its implementation (official launch at the conference, today at 4:30 pm).
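The interoperability standards named in the strategy (FHIR, OMOP) define the record formats that make longitudinal health data exchangeable across systems. As a concrete flavor, here is a minimal FHIR R4 Patient resource; the field names follow the published FHIR R4 Patient schema, while the values are invented sample data:

```python
# A minimal FHIR R4 Patient resource as a Python dict. Field names follow the
# FHIR R4 Patient schema (resourceType, id, name, gender, birthDate); the
# values are fictitious sample data, not from any real record.
import json

patient = {
    "resourceType": "Patient",
    "id": "example-001",
    "name": [{"use": "official", "family": "Sharma", "given": ["Asha"]}],
    "gender": "female",
    "birthDate": "1988-04-12",
}

# Any FHIR-capable system can exchange this record as JSON:
payload = json.dumps(patient)
print(json.loads(payload)["resourceType"])  # Patient
```

Because every ABDM-linked system would read and write the same resource shapes, a screening tool, a hospital EHR, and a public-health registry can share one patient's longitudinal record without bespoke adapters, which is the scalability point the panel was making.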
Inclusive AI for Citizen Services: Language and Last-Mile Impact
The session at the India AI Impact Summit 2026 centered on the democratization of AI, emphasizing how India can harness artificial intelligence to benefit its diverse and multilingual population. With participation from leading figures in government digital transformation (such as Amitabh, CEO of Digital India Bhashini), global technology enterprises (Calista Redmond, Vice President, Nvidia), and industry/NGO representatives, the discussion explored how digital language infrastructure, inclusive AI development, public-private partnerships, and sovereign AI models can bridge societal divides. Key topics included the vital importance of language as digital infrastructure, the challenges and methods for creating AI models that represent India's cultural and linguistic diversity, leveraging crowdsourced and collaborative approaches for model development, and balancing local innovation with participation in the global AI ecosystem. The discourse underlined the need for dependable multilingual AI platforms, locally relevant datasets, and collaborative standards that ensure AI's reach and utility for every Indian citizen, regardless of geography, socioeconomic status, or language spoken.
- India's AI democratization must address 1.44 billion people across more than 200 spoken languages, of which 22 are official.
- Treating language as national digital infrastructure involves multi-layered approaches: from corpus/data, models, to applications, supported by robust platforms and well-defined standards.
- The Digital India Bhashini AI initiative delivers real-time multilingual translation and promotes collaboration among government, research institutions, academia, and industry.
- Crowdsourcing and ecosystem-based model development (rather than single-entity in-house building) are vital to aggregating local datasets and ensuring contextual accuracy and inclusion.
- Diversity is positioned as 'the new standard' in AI, requiring systems to be interoperable yet flexible enough to accommodate India's vast linguistic and cultural variety.
- Sovereign AI is interpreted variably across nations but generally aims to reinforce national strength while engaging with the global economy; local model development and training are essential but must be cross-fertilized with global LLMs and datasets.
- Public-private partnerships (governments, global companies, startups, academia) are seen as essential for national-scale AI programs and building the broader AI ecosystem.
- India's digital public infrastructure efforts (such as Aadhaar) provide a blueprint for population-scale technology solutions.
Digital Public Goods for Global AI Equity
This session at the India AI Impact Summit 2026 explored critical challenges and opportunities in democratizing AI through better data access, open standards, and collaborative ecosystems. Panelists highlighted the 'data desert' challenge in India—where much public data is siloed, fragmented, and poorly documented—limiting innovation, especially for startups and smaller players. The session also addressed the lack of culturally relevant benchmarks for African and other global majority languages, stressing that standard evaluation tools often do not capture necessary nuance. Organizations like Masakhane and the Sahara project are actively working to close benchmarking gaps and ensure datasets are reusable, sustainable, and community-led. Speakers underscored that openness alone is insufficient unless robust governance, guardrails, and equity considerations are established to avoid dominance by large tech companies. Initiatives like Current AI and efforts by Mozilla aim to foster a collaborative, community-driven approach to AI development and deployment, advocating for digital public goods that benefit diverse populations and prevent concentration of power in a few hands. The conversation emphasized that collective action, inclusivity, and sustainable open access to data, models, and benchmarks are crucial to realizing AI's full potential for societal good.
- India faces a 'data desert': public data is siloed, lacks provenance/metadata, and is not easily accessible or usable for AI innovation.
- Most available datasets (e.g., on the national AI Kosh platform) have very low utilization, highlighting the data access challenge for startups.
- Opening data access is critical but must be accompanied by safeguards to prevent capture and misuse by large, resource-rich entities.
- Standard AI language benchmarks (e.g., HELM, MMLU) do not represent African languages' diversity or context, impeding equitable AI development.
- Masakhane and Sahara initiatives are creating benchmarks for 40 African languages, focusing on reusability and context-appropriateness, supported by a call for open, sustainable, community-led evaluation frameworks.
- Current AI, launched at the 2025 Paris AI Action Summit, aims to create a collective, collaborative global structure for AI, avoiding concentration of power in a handful of corporations.
- Open source is positioned not just as a technical solution but as an ideology and methodology for large-scale collaboration and knowledge sharing in AI.
- Mozilla is working to replicate its open internet success in the AI space by supporting grassroots, community-driven AI projects and preventing power concentration in 'frontier labs'.
- Digital public goods alliances, involving major stakeholders like Mozilla and the French government, are driving global collaboration around open, accessible, and impactful AI tools.
- The session emphasized that openness, sustainability, inclusivity, and strong governance are all essential for AI to serve global societal needs.
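The benchmarking gap the panel described has a simple structural cause: pooled accuracy scores average away performance on low-resource languages. A per-language evaluation harness, sketched below with invented data and names, makes the disaggregation the Masakhane and Sahara efforts call for explicit:

```python
# Sketch of per-language benchmark aggregation: score a model separately on
# each language's eval set so low-resource languages are not hidden inside a
# pooled average. Language codes and data are illustrative, not from any
# actual Masakhane or Sahara benchmark release.

def per_language_accuracy(results: dict[str, list[tuple[str, str]]]) -> dict[str, float]:
    """results maps language code -> list of (prediction, gold) pairs."""
    return {
        lang: sum(pred == gold for pred, gold in pairs) / len(pairs)
        for lang, pairs in results.items()
    }

scores = per_language_accuracy({
    "yor": [("a", "a"), ("b", "b"), ("c", "d")],   # Yoruba sample: 2 of 3 correct
    "swa": [("x", "x"), ("y", "y")],               # Swahili sample: 2 of 2 correct
})
print(scores)
```

Reporting the per-language dict rather than its mean is the community-led evaluation practice the session advocates: a model that scores 100% on Swahili but 67% on Yoruba should be visible as exactly that.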
"AI Transforming BFSI: Practical Strategies & Safe Scaling | India AI Impact Summit 2026"
The session provided a comprehensive overview of the accelerating adoption of AI in India's financial services sector, particularly focusing on the transition from basic chatbots to more sophisticated agentic systems, the importance of multimodal models, and the rise of context engineering as a critical success factor. Citing survey data across 170 BFSI (Banking, Financial Services, and Insurance) customers, the speaker emphasized that nearly 90% are actively adopting AI, with 60% in advanced stages. A significant highlight was the announcement of a 1-gigawatt AI data center to be built over the next five years, demonstrating the industry's vertical integration from infrastructure to applications. The session showcased Tata Capital's multimodal, voice-based AI platform that addresses India's linguistic diversity, underlining the need for enterprise-specific and fine-tuned AI models for enhanced compliance, accuracy, and customer experience. While momentum is strong, challenges remain with ROI realization, integration at enterprise scale, regulatory compliance, and domain-specific fine-tuning. The presentation concluded by advocating a hybrid, platform-based AI strategy with rigorous context engineering, and illustrated real-world deployments for customer engagement and claims processing.
- AI adoption in India's BFSI sector is in a hyper-acceleration phase, moving from basic chatbots to agentic, action-oriented systems.
- A major survey of 170 BFSI customers found nearly 90% have adopted AI in some form, with 60% reporting advanced adoption stages.
- 50% of respondents indicate a need for more enterprise-specific, fine-tuned AI models, especially due to regulatory requirements.
- Announcement: Launch of a 1-gigawatt AI data center, to be built over the next five years, supporting vertical integration from hardware to AI applications.
- Tata Capital introduced a multimodal, voice-first AI solution serving millions, tailored to India’s multilingual, voice-led market.
- The shift towards multimodal AI, combining various models and input types, is seen as the future for process automation.
- Context engineering is critical—carefully structuring and fine-tuning AI for domain-specific accuracy and trust.
- Key AI deployment areas include customer service, coding assistance, compliance, claims processing, and investment advisory.
- Persistent challenges include ROI measurement, integration with enterprise systems, ensuring compliance, and addressing linguistic diversity.
- Hybrid AI strategies combining LLMs and domain-adapted models are forecasted, with fine-tuning expected to accelerate in BFSI over the next 6–9 months.
- Effective enterprise-wide impact requires platform-based approaches and workflow orchestration, not just isolated use cases.
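"Context engineering," as the session uses the term, means assembling verified domain material around a user's query before it reaches a model. A hedged sketch under assumed names follows; the retrieval step is a toy keyword stand-in for the vector search a real BFSI deployment would use:

```python
# Hedged sketch of context engineering for a BFSI assistant: retrieve relevant
# policy clauses, then wrap the user's question in a prompt that constrains the
# model to that context. The retrieval is a toy keyword-overlap ranking; a
# production system would use semantic search. All names are illustrative.

def retrieve_clauses(query: str, clauses: list[str], top_k: int = 2) -> list[str]:
    """Rank clauses by word overlap with the query (stand-in for vector search)."""
    words = set(query.lower().split())
    ranked = sorted(clauses, key=lambda c: -len(words & set(c.lower().split())))
    return ranked[:top_k]

def build_prompt(query: str, clauses: list[str]) -> str:
    context = "\n".join(f"- {c}" for c in retrieve_clauses(query, clauses))
    return (
        "Answer strictly from the policy clauses below.\n"
        f"Clauses:\n{context}\n"
        f"Question: {query}"
    )

clauses = [
    "Claims must be filed within 30 days of the incident",
    "Pre-existing conditions are covered after 24 months",
    "Roadside assistance is limited to 3 calls per year",
]
print(build_prompt("How many days to file a claim?", clauses))
```

Structuring the prompt this way is what ties the compliance and accuracy bullets together: the model is grounded in auditable source clauses, so answers can be traced back to the retrieved context.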
AI and the Future of Creativity: Power and Public Imagination
The session 'AI and the Future of Creativity' at the India AI Impact Summit 2026, organized by Mozilla Foundation and G5A, convened an esteemed panel comprising Zad Borat (Mozilla), Nikhil Advani (filmmaker, G5A), and Saran Vighraham (Meta). Moderated by Diva from the Collective Intelligence Project, the discussion critically examined the intersection of AI technologies and creative industries. Speakers reflected on how lessons from the open internet era—such as collective governance, legal frameworks, and the democratization of creative contribution—offer both cautionary tales and opportunities as generative AI reshapes creativity. The panel stressed the urgency of addressing structural issues ranging from copyright and compensation to platform incentives and artistic intent. While recognizing AI's potential for democratising creative tools and giving voice to new entrants, the conversation repeatedly warned of the risk that economic incentives and algorithmic homogenization might stifle original artistic intent and flatten cultural diversity. Panelists urged policy interventions focused not just on after-the-fact regulation but on proactive structural changes—like data governance, labor and tax policy reform, and embedding transparency and community governance in AI design—to ensure AI truly enhances human creativity rather than undermining it.
- The panel drew parallels between the early open internet and the current rise of generative AI, noting both the flowering of creativity (Wikipedia, Creative Commons) and exploitation (algorithmic engagement farming, loss of creator compensation).
- AI democratizes access to creative tools, enabling individuals from remote regions to create films and other works once reserved for a privileged few.
- Risks discussed included the erosion of artistic intent and originality due to misaligned economic or algorithmic incentives, and the potential for a cultural monoculture.
- Panelists highlighted unfinished frameworks around legal issues, compensation, and representation for creators in AI-generated works.
- Policy recommendations included: making 'imagination' visible as a critical layer in the AI stack; designing structural interventions beyond copyright/IP; aligning economic incentives towards fair compensation and original expression; and supporting community-driven governance and licensing.
- Speakers urged proactive engagement with legal, tax, and labor policies to support independent creators and cultural institutions, rather than waiting for courts or reactive regulation.
- The session framed the current moment as a critical fork in the road: either AI can foster equitable, diverse creative flourishing—or repeat and magnify past harms.
- Originality and artistic intention were emphasized as irreplaceable by technology; panelists questioned whether AI systems can ever replicate the deeper meaning and intention behind great art.
Efficient AI Infrastructure for India
The session focused on the rapidly escalating energy demands driven by AI data centers, especially in India, which is on the cusp of significant hyperscale data center growth. Experts from Aurora Energy Research and the International Energy Agency outlined the doubling of global data center electricity consumption by 2030, contextualizing the uniquely intensive and location-specific strain these centers pose on local grids. India, with clusters concentrated in Mumbai, Delhi NCR, and Chennai, currently has around 1.1 to 1.5 GW of IT load, projected to reach 6 GW by 2030, a roughly fourfold increase driven mainly by AI inference and training centers operating at high load factors. Present and emerging global regulatory responses such as Power Usage Effectiveness (PUE) mandates, renewable Power Purchase Agreements (PPAs), and the push towards hourly renewable matching were discussed. With construction timelines for data centers far outpacing conventional energy grid upgrades, the session emphasized the need for urgent coordinated policy, efficiency standards, and grid planning to ensure sustainability and resiliency as India's AI and data infrastructure buildout accelerates.
- AI is fueling one of the fastest-growing sources of global electricity demand; data centers now consume about 2% of global electricity, expected to double to 4% by 2030.
- India's current IT connected load for data centers is 1.1-1.5 GW, projected to hit 6 GW by 2030, a roughly fourfold increase in under a decade.
- Hyperscale and AI-focused data centers can consume as much electricity as 100,000 to 2-3 million households, exerting substantial local grid and water stress.
- Data centers typically operate at 85-90% load factors with flat 24/7 demand, constituting new baseloads for urban grids.
- Construction time for data centers (12-18 months) is much shorter than for necessary grid upgrades (5-10 years), creating infrastructure mismatch and planning challenges.
- Power Usage Effectiveness (PUE) regulation is already mandated in several countries (Australia, China, Japan, France, Germany), but not yet in India.
- India’s climate necessitates data center-specific cooling efficiency measures; existing standards are not suited for high-heat environments.
- Over 90% of data centers remain grid-connected, making renewable PPAs an incomplete solution for 24/7 clean energy goals.
- Emergent sustainability trends include hourly renewable energy matching and enhancing data center operational flexibility to align with grid and renewable availability.
- Indian policymakers are beginning to consider PUE standards and other efficiency protocols to manage the coming surge.
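The figures in this session reduce to back-of-envelope arithmetic worth making explicit. PUE, the metric behind the mandates listed above, is defined as total facility power divided by IT equipment power (so 1.0 would mean zero cooling/overhead energy). The sample facility sizes below are invented for illustration:

```python
# Back-of-envelope arithmetic for the figures quoted in this session.
# PUE (Power Usage Effectiveness) = total facility power / IT equipment power.
# The 130 MW / 100 MW facility is an invented example, not a cited site.

def pue(total_facility_kw: float, it_load_kw: float) -> float:
    return total_facility_kw / it_load_kw

# India's projected growth multiple: from 1.1-1.5 GW of IT load today to ~6 GW by 2030.
low_multiple, high_multiple = 6 / 1.5, 6 / 1.1
print(f"growth: {low_multiple:.1f}x to {high_multiple:.1f}x")  # 4.0x to 5.5x

# A facility drawing 130 MW in total on a 100 MW IT load:
print(pue(130_000, 100_000))  # PUE of 1.3
```

The growth-multiple calculation shows why "threefold" understates the projection: even the high end of today's load (1.5 GW) quadruples on the way to 6 GW, and PUE mandates bite precisely because every watt of IT load drags 20-40% more in overhead at typical PUE values.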
Launching the IndiaAI Study: Advancing AI Readiness in Manufacturing MSMEs
The session at the India AI Impact Summit 2026 marked the formal launch of a comprehensive study focused on accelerating AI adoption in India's manufacturing sector, particularly among Micro, Small, and Medium Enterprises (MSMEs). Senior government officials and industry leaders emphasized the critical importance of deploying AI solutions that deliver tangible gains in productivity and competitiveness for smaller firms across textile, pharmaceutical, and electronics sectors. The initiative, spearheaded by the IndiaAI Mission in partnership with the National Institute for Smart Government (NISG) and Athena Infonomics, addresses key challenges such as affordability, readiness, and accessibility of AI-powered tools. The study aims to fill crucial market intelligence gaps—on both demand (MSMEs) and supply (technology providers) sides—by conducting 350+ factory immersions, directly engaging with stakeholders on the ground, and mapping the path towards scalable, customizable AI applications. The session underscored India's potential to serve as a global exemplar in democratizing AI-driven industrial innovation, leveraging frugal solutions to realize inclusive, wide-scale digital transformation for its vast MSME ecosystem.
- A major study was officially launched to assess and accelerate AI adoption in India's manufacturing sector, especially among MSMEs.
- The study is a collaboration between the IndiaAI Mission, NISG (National Institute for Smart Government), Athena Infonomics, and industry ministries (including MSME, Textiles, Pharmaceuticals, and Electronics).
- Focus initially centers on the textile, pharmaceutical, and electronics sectors—three high-growth or traditional strongholds for India.
- The study will involve more than 350 factory immersions to gather firsthand insights into readiness, barriers, and opportunities for AI adoption.
- Key objectives include evaluating cost/benefit of AI for MSMEs, identifying early adopter clusters, developing a 'shelf' of practical AI solutions (ERP, CRM, production tools), and supporting MSMEs through possible government incentives.
- Policymakers highlighted the importance of making AI tools affordable and accessible, so smaller firms remain competitive and avoid being left behind.
- The program aims to fill market intelligence gaps—facilitating informed, scalable adoption pathways by engaging both MSMEs (demand) and tech providers (supply).
- India is positioning itself as a global leader in inclusive, frugal AI adoption, with the potential to export learnings and models to other countries.
- Commitment to involve both industry and AI service providers from the outset, ensuring solutions are practical and scalable.
- A design and dissemination strategy ensures that study results will be broadly shared, supporting evidence-based policymaking and industry transformation.
How AI is Transforming the Digital Health Ecosystem | India AI Impact Summit 2026
The session at the India AI Impact Summit 2026 brought together senior leaders from organizations such as the World Bank, Access Health International, and prominent research institutions to discuss the critical enablers, strategies, and challenges for the responsible and impactful integration of AI in healthcare systems, especially in India and the Global South. The discussion emphasized the foundational role of robust digital infrastructure, investments exceeding $4 billion in digital health systems, the development of policy and implementation frameworks like ABDM, and the importance of data quality and human-centric approaches. Panelists highlighted notable progress, such as the upcoming national AI strategy for health, and persistent challenges around infrastructure, capacity building, ethical and social considerations, scalability, and data reliability. Success stories of scaling AI solutions through multi-stakeholder partnerships were shared, alongside examples of responsible deployment leveraging 'human-in-the-loop' models to maintain clinical oversight and build frontline confidence. In the realm of biomedical research, AI is being harnessed for preventive healthcare and personalized interventions, with a push to translate lab innovations into real-world health outcomes. The panel concluded with an urgent call to address infrastructure gaps, foster local innovations, promote ethical standards, and scale successful pilots for lasting AI-led transformation in health systems.
- World Bank has invested over $4 billion in foundational digital health systems, focusing on interoperability, data quality, and infrastructure.
- A national AI strategy for health, developed over 1.5 years through 150+ consultations, is set to launch, reflecting broad stakeholder engagement.
- Access Health International has played a pivotal role in the design and policy framework of the Ayushman Bharat Digital Health Mission (ABDM), tracking its five-year impact and developing AI implementation research frameworks.
- Challenges highlighted include infrastructure gaps (especially in the Global South), limited capacity building, lack of scalable pilots, insufficient local innovation support, ethical/social issues, and poor data quality.
- Concrete examples shared of AI health tools developed by NGOs being scaled up to populations of 50-100 million through partnerships with state governments.
- Responsible AI deployment in healthcare prioritizes explainability, clinician oversight ('human-in-the-loop'), and building frontline confidence, such as remote review protocols for cervical cancer screening.
- Biomedical research is leveraging AI for preventive healthcare, personalized region/culture-specific interventions, and disease risk stratification, aiming to translate discoveries from lab to real-world impact.
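The 'human-in-the-loop' pattern described above can be made concrete with a minimal sketch. This is an illustrative triage workflow, not the actual system discussed in the session: flagged or uncertain model outputs are routed to a remote clinician queue rather than acted on automatically. All names and the threshold are assumptions.

```python
from dataclasses import dataclass, field

# Illustrative 'human-in-the-loop' triage: the model never issues a
# diagnosis on its own; any non-trivial finding is queued for remote
# clinician review, as described for cervical cancer screening.

@dataclass
class Screening:
    patient_id: str
    ai_score: float  # model's estimated probability of abnormality

@dataclass
class TriageResult:
    auto_cleared: list = field(default_factory=list)
    clinician_queue: list = field(default_factory=list)

def triage(cases, flag_threshold=0.3):
    """Route every flagged or uncertain AI finding to a human reviewer."""
    result = TriageResult()
    for case in cases:
        if case.ai_score < flag_threshold:
            # low-score cases pass, though a sample would still be audited
            result.auto_cleared.append(case.patient_id)
        else:
            # the human reviewer makes the final call
            result.clinician_queue.append(case.patient_id)
    return result

cases = [Screening("p1", 0.05), Screening("p2", 0.72), Screening("p3", 0.31)]
out = triage(cases)
```

The design point is that the threshold governs workload, not diagnosis: lowering it sends more cases to clinicians, which is how frontline confidence is built before automation is widened.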
AI for ALL Challenge & Panel on Leveraging AI for Development in the Global South
The transcript from the India AI Impact Summit 2026 captures a dynamic and collaborative landscape for artificial intelligence in India, marked by strong government policy, notable public-private partnerships, and a focus on democratizing both access and opportunity for AI innovation across all socio-economic strata. Key initiatives, such as the government-supported AI Kosh and data platforms with thousands of active users, illustrate India's emphasis on data quality, inclusivity, and stakeholder collaboration. The panelists highlighted India's unique approach to AI—centered on democratization, ethical growth, and mass empowerment—backed by mission-mode policies and a robust startup ecosystem spanning from tier-1 cities to remote villages. Ongoing programs like the Atal Innovation Mission and new platforms like Ideabas are providing tools, mentorship, funding, and exposure to startups, while investors and industry leaders stress the importance of solving local problems at scale and fostering a competitive yet collaborative spirit, especially for the global south. The session reinforces India's rapid progress in global innovation and AI application, underscoring a vision where technology is a tool for inclusion and nationwide transformation rather than replacement.
- Emphasis on data quality as foundational for AI development, with government platforms like AI Kosh serving as ecosystem drivers.
- Examples such as the Delhi Metro AI innovation challenge showcase over 6,000 active users and 300+ participating teams, with startups deploying AI for transit safety.
- The Indian government has launched multiple missions (e.g., Atal Innovation Mission) to shift from job seekers to job creators, covering 1.44 million schools and 10,500 engineering institutions nationwide.
- India's global innovation index ranking improved from 81 to 38 over the past seven years due to mission-driven policy and inclusive outreach.
- AI is revolutionizing the technology adoption curve, prompting urgent mission-mode actions under frameworks like the 'Seven Chakras' of India AI.
- India focuses on democratizing AI for social and economic inclusion, especially targeting rural, agricultural, MSME, and marginalized populations.
- Platforms like Ideabas are conceptualized to democratize access to AI opportunities across tier-2, tier-3, and tier-4 cities, regardless of background.
- Panelists emphasize collaborative approaches, involving government, investors, VCs, and the broader ecosystem, as essential for scaling AI and integrating ethical values.
- India's proven success model (e.g., UPI for fintech) demonstrates the effectiveness of public-private collaboration, now being applied to AI.
- Investors see India's acute, population-specific problems as fertile ground for unique, globally significant AI solutions, particularly in education, healthcare, and agriculture.
- Importance of mentorship, business development support, and teamwork for startups—especially crucial given rapid change and local challenges in the global south.
- Over 35,000 participants engaged in the India AI mission, evidencing mass movement and nationwide excitement for AI.
Women in Climate and AI: Bridging the Innovation Gap
This panel at the India AI Impact Summit 2026 explored the intersection of gender, climate action, and artificial intelligence, highlighting efforts from leading organizations to address deep-seated inequalities in the workforce and access to financing for women. Representatives from Salesforce, IFC, UN Women, and Resilience AI discussed their tools, policies, and programs promoting women’s economic empowerment, particularly in climate tech and AI. Salesforce showcased data-driven platforms to improve equity tracking and decision-making. IFC outlined ambitious gender-tagging of investments, including risk-sharing mechanisms and detailed KPIs to address gaps in access to capital for women entrepreneurs. UN Women emphasized institutional solutions and data collection with intentionality to make climate and AI initiatives inclusively gender-responsive. Resilience AI illuminated best practices for ground-level AI deployment in disaster management, ensuring diverse representation in code and practice. Throughout, the session stressed the need for robust, gender-disaggregated data, intentional programming, and ecosystem-wide standards to meaningfully close the gender gap in climate and technology sectors.
- Salesforce's Net Zero Cloud and Agentforce tools enable organizations to track workforce gender representation and wage gaps at all levels, and to enforce gender-based guardrails and KPIs in sustainability and investment decisions.
- Only around 30% of South Asian women participate in the labor force, and women are 30% less likely than men to own mobile phones, highlighting significant structural barriers.
- IFC has set a mandate that 60% of all investments and advisory services must be 'gender tagged,' embedding strict gender-related KPIs—such as female leadership, job creation, and access to technology and credit.
- IFC is running accelerator programs (e.g., 'She Wins') to channel more funding into women-led climate startups, addressing a global $1.7 trillion credit gap and the fact that less than 3% of VC funding currently supports women-led startups.
- UN Women is driving institutional solutions for gender parity, promoting the Women's Empowerment Principles (WEPs) framework and ensuring data collected through supported programs is gender inclusive and actionable.
- UN Women recently completed climate tech entrepreneurship support programs across South and Southeast Asia, integrating gendered data collection and impact frameworks.
- Resilience AI, a company founded by four women, provides rapid disaster exposure analysis using AI and ensures women are both represented in the data and actively participate in model development, emphasizing the non-removal of humans (of all genders) from the AI decision-making loop.
- Sessions underscored that intentionality in program design—and data that is both gender-tagged and used for actionable, equity-focused decisions—are critical leverage points for inclusive climate and AI sector development.
Scaling Trusted AI: Global Practices, Local Impact
The session highlighted India's rapid progress in AI integration across multiple sectors, with AI projected to contribute up to $500 billion to India's GDP by 2030. A striking 47% of Indian enterprises already have AI in production, and 74% have accelerated their AI implementation in the last two years. The speaker emphasized that while AI capability is important, the coming era will be defined by trust—measurable, demonstrable, and operationalized trust in AI systems. India has taken a pioneering lead with the release of AI governance guidelines and the development of a techno-legal framework, ensuring governance is embedded throughout the AI lifecycle. Key challenges remain around translating policy into operational standards, developing new professional roles in AI governance, and creating interoperable benchmarks for global trust. The speaker announced the launch of a Global Compendium of Contextual AI Use Cases and the introduction of the first global AI Governance Insights Hub by Credo AI, offering a comprehensive, continuously updated resource on policy, risks, controls, and vendors. These initiatives are aimed at equipping enterprises and startups with actionable tools to navigate complex regulatory environments and build trustworthy AI at scale.
- AI is expected to add $15.7 trillion to the global economy and $500 billion to India's GDP by 2030.
- 47% of Indian enterprises have multiple AI use cases in production; 74% accelerated AI rollout in the last two years.
- Major AI applications in India span agriculture, banking, and rural healthcare.
- India is leading not just in building AI, but focusing on responsible, trustworthy AI deployment for its 1.44 billion citizens.
- India’s legacy of trusted digital infrastructure, like Aadhaar and UPI, sets a precedent for AI governance.
- India has released AI governance guidelines based on 'seven sutras' for responsible AI, forming a solid foundation for future policies.
- The techno-legal framework introduces 'governance by design', ensuring continual measurement and verification of trust across the entire AI lifecycle.
- Collaborative efforts between the Indian government and technology leaders are shaping purpose-built, sector-specific governance models.
- Operationalizing policy into practice demands investment in AI governance professions and continuous monitoring tools.
- Speaker’s company (Credo AI) launched the Global Compendium of Contextual AI Use Cases with contributions from Mastercard, Autodesk, G42, AP, and Cisco.
- Credo AI also announced the launch of the Global AI Governance Insights Hub—an adaptive, AI-powered, single source of truth for global AI governance policies, risks, and vendors.
- The hub features three pillars: global policy tracking, risk taxonomy (covering 16 risk types), and vendor/model intelligence.
- AI governance is reframed as a market advantage, not just regulation—as organizations with demonstrable trust in their AI win contracts and market access.
- A whole new category of AI governance jobs is emerging, focusing on risk verification, ethical oversight, stakeholder management, and trust assurance.
- India’s approaches and tools are positioned as potential benchmarks for global AI governance, especially for the Global South.
Democratising Predictive AI for MSMEs and Public Systems
The panel at the India AI Impact Summit 2026 brought together leaders from industry (Swiggy), technology consulting (Zinga Labs), and venture capital (Campus Fund) to discuss the real-world impact and democratization of predictive intelligence in India. They emphasized that predictive intelligence is now an operational necessity rather than a competitive advantage, affecting sectors as diverse as supply chains, public health, and education. While foundational AI models and advanced analytics are revolutionizing large enterprises, the consensus was that the benefits remain unevenly distributed—often bypassing SMEs and public sector organizations due to challenges around data standardization, digitization, and access to AI infrastructure. Panelists highlighted the need for robust data infrastructure initiatives like 'AI Kosh'—akin to how UPI transformed finance—to make predictive capabilities accessible to all. Building public trust in AI was highlighted as crucial, to be achieved through demonstrable early wins in non-critical applications, government-supported open data stacks, and iteratively experimenting with and communicating AI's real-world successes. Only through this multidimensional approach can India truly democratize AI and unlock its value across both private and public sectors.
- Predictive intelligence has become a baseline operational requirement across industries, not just an advanced feature.
- Small and medium enterprises (SMEs), which represent approximately 30% of India's GDP, are currently disadvantaged due to lack of access to AI and predictive intelligence.
- Public sector examples such as frontline health worker data illustrate the absence of AI-enabled predictive layers that could preempt crises like epidemics.
- Major enterprises benefit first from AI advances, widening the gap with smaller players who lack structured and standardized data.
- Initiatives like 'AI Kosh' aim to create standardized, open data infrastructure for AI, similar to the success of UPI in fintech.
- Democratizing AI requires attention at multiple layers: data standardization, infrastructure access, easy-to-use interfaces, and foundational models.
- Trust in AI is built through successful, demonstrable use cases (especially in less risky domains), iterative experimentation, and government-backed open ecosystems.
- Government initiatives like ANRF are driving the creation of open digital stacks to support startup innovation across multiple public sectors.
- Major barriers remain: lack of data digitization and structure, limited AI expertise in the public sector, and the need to embed AI tools into daily workflows.
Co-Creating India’s AI Ecosystem: CDAC Strategic Partnerships & MoU Exchange
The session at the India AI Impact Summit 2026 highlighted India’s unique AI journey, emphasizing self-reliance, collaboration, and inclusiveness as core themes. The Center for Development of Advanced Computing (CDAC), under the Ministry of Electronics and Information Technology (MeitY), played a central role in driving indigenous AI infrastructure and partnerships. The event celebrated the move from AI ambition to execution via robust AI supercomputing, indigenously designed processors, and deep collaboration among academic, industry, and government stakeholders. Several MoUs and collaborations were acknowledged, with CDAC anchoring mission-driven AI deployments across critical sectors like healthcare, agriculture, cybersecurity, and governance. The summit was distinguished by its scale, with more than 500 sessions, around 900 innovation stalls, and an open, inclusive format designed to democratize technology and drive use-case driven, bottom-up AI adoption. The leadership stressed that India's path in AI is not about replicating global models, but innovating for its own diverse, multilingual context and generating trusted, sovereign AI capabilities.
- CDAC is leading India's AI infrastructure push through high-performance supercomputing (e.g., the PARAM series and AIRAWAT) and the indigenous DUV64 64-bit dual-core microprocessor.
- AI deployment focuses on mission-scale solutions in healthcare, agriculture, cybersecurity, finance, and public governance.
- Formal partnerships and MoUs were announced with academic institutions, R&D establishments, and industry leaders to spur collaboration on AI, chip design, and application development.
- Emphasis was put on bottom-up, user-driven system development to ensure real-world relevance and impact.
- The summit featured over 500 sessions and nearly 900 stalls, underlining scale, inclusiveness, and accessibility.
- MeitY highlighted India's digital public infrastructure and the critical role of AI in advancing inclusive, multilingual, and sovereign digital solutions.
- CDAC's expanded national network of 12 centers strengthens operational AI deployment and local collaboration.
- Calls for democratization of AI, cross-sectoral partnerships, and a distinctively Indian approach to technology adaptation were repeatedly reinforced by summit leaders.
Using AI to Strengthen Public Service Delivery: Evidence & Impact | India AI Impact Summit 2026
The speaker shared insights from a collaborative research project in Togo, where AI and mobile phone data were leveraged to rapidly and accurately target cash transfers during the COVID-19 crisis. This innovative approach addressed the data scarcity in low- and middle-income countries, effectively identifying the most vulnerable populations more efficiently than traditional methods, especially in rural settings. While the AI-powered targeting system proved highly successful, the same AI-driven approach fell short in accurately measuring the program's impact, underscoring the challenges of relying solely on big data algorithms for complex, short-term vulnerability outcomes. The session emphasized the need for policymakers to blend AI insights with traditional evaluation techniques, carefully consider context, and remain cautious of AI's promises and limitations as governments increasingly turn to data-driven decision-making in crisis and development interventions.
- A research project with the Togo government used AI and mobile phone data to target cash transfers during COVID-19, especially where administrative data was lacking.
- The AI model processed data from 5.83 million subscribers and 1.3 billion calls to identify poor and vulnerable populations for rapid aid delivery.
- Traditional methods, like proxy means tests, were less viable in rural Togo due to data constraints, but AI filled the gap.
- The AI approach significantly improved the speed and accuracy of aid targeting in a crisis situation.
- A randomized control trial evaluated the impact of the cash transfer program, revealing improved food security and mental health via survey data.
- AI algorithms based on mobile data performed poorly in measuring short-run impacts, likely due to limitations in data recency and differences between short-term vulnerability and longer-term economic status.
- The session cautioned against over-reliance on AI, advocating a balanced approach that includes rigorous evaluation and contextual understanding.
- A key policy takeaway: AI can dramatically enhance crisis response efficiency, but must be combined with traditional measurement and evaluation for decision-making.
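The targeting approach described above—scoring mobile subscribers from call records and directing transfers to those predicted poorest—can be sketched as follows. This is a toy illustration under stated assumptions, not the study's actual pipeline: the features, the stand-in model, and the beneficiary share are all invented for clarity.

```python
from collections import defaultdict

# Illustrative sketch (not the Togo study's actual pipeline): aggregate
# call-detail records (CDRs) into per-subscriber features, score them
# with a poverty model, and select the predicted-poorest for transfers.

def subscriber_features(call_records):
    """Aggregate raw CDR rows (caller, callee, duration_sec) per subscriber."""
    feats = defaultdict(lambda: {"n_calls": 0, "total_sec": 0.0, "contacts": set()})
    for caller, callee, dur in call_records:
        f = feats[caller]
        f["n_calls"] += 1
        f["total_sec"] += dur
        f["contacts"].add(callee)
    return {
        sid: (f["n_calls"], f["total_sec"], len(f["contacts"]))
        for sid, f in feats.items()
    }

def target_poorest(features, predict_wealth, share=0.29):
    """Rank subscribers by predicted wealth; return the poorest `share`."""
    ranked = sorted(features, key=lambda s: predict_wealth(features[s]))
    k = max(1, int(len(ranked) * share))
    return set(ranked[:k])

# Toy stand-in for a trained model: fewer/shorter calls and fewer
# contacts imply lower predicted wealth.
toy_model = lambda f: f[0] + f[1] / 60 + 2 * f[2]

records = [("a", "b", 30), ("b", "a", 600), ("b", "c", 300), ("c", "b", 120)]
feats = subscriber_features(records)
beneficiaries = target_poorest(feats, toy_model, share=0.34)
```

The session's caveat maps directly onto this sketch: a ranking like this can be good enough to *target* aid quickly, yet the same features can be too coarse and too stale to *measure* short-run welfare changes afterward.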
The AI Regulatory Landscape: Making Sense of Safety & Compliance
The session focused on the challenges and best practices of ensuring AI safety and compliance when operating between India and the European Union (EU). Speakers compared the regulatory environments: the EU has a stringent, dedicated AI Act that categorizes AI systems by risk and enforces robust compliance obligations, while India currently prioritizes innovation with relatively minimal AI-specific regulation. As a result, Indian companies aiming to serve EU markets must adopt risk-based compliance, ongoing risk management, technical documentation, data quality assurance, transparency, human oversight, cybersecurity, and post-market monitoring. Cross-border compliance requires understanding and bridging differences in standards, and proactively addressing overlapping legal obligations. Collaborative efforts, such as EU-funded research centers like Clara, are developing trustworthy AI by integrating technical innovation with compliance and privacy. The session underscored the need for harmonized safety standards amid divergent regulatory landscapes, with both regions placing emphasis on user welfare, trust, and continuous AI system monitoring.
- The EU has implemented the EU AI Act, one of the strictest AI regulatory frameworks globally, categorizing AI systems by risk (high, medium, low/minimum) and imposing corresponding obligations.
- India, with 1.44 billion people, currently lacks a dedicated AI Act, focusing on fostering innovation rather than regulation.
- Indian companies serving EU users must comply with the EU AI Act, including proper risk classification, ongoing monitoring, and clear technical, security, and transparency measures.
- Key safety parameters for AI compliance include risk classification, continuous risk management, human oversight, transparent user communication, accuracy, robustness, cybersecurity, quality of datasets, and post-market monitoring.
- Cross-compliance (operating in both regions) requires adhering to the most stringent standards applicable and understanding both local and foreign data/protection laws.
- Collaboration across borders is critical; harmonizing or at least recognizing overlapping standards is necessary, especially as India may institute more comprehensive AI regulations in the coming years.
- The session highlighted large-scale EU-funded projects like Clara (over €43 million funding), integrating multidisciplinary technological advances (AI, quantum computing, HPC) with legal and ethical compliance to support trustworthy AI.
- Speakers emphasized privacy protections and trustworthiness as non-negotiable in both India and the EU, with increasing focus on incorporating human oversight and continuous evaluation into the AI development lifecycle.
- Sandboxes and grey areas in the compliance landscape can be opportunities for innovation, provided safety and compliance basics are met.
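The risk-based compliance logic summarized above can be expressed as a simple lookup. The tiers and duty lists below paraphrase the session's summary for illustration only; they are not the EU AI Act's legal text, and real classification requires legal analysis.

```python
# Simplified sketch of the risk-tier -> obligations mapping the session
# described; categories and duties are an illustrative paraphrase, not
# the EU AI Act's actual legal requirements.

OBLIGATIONS = {
    "high": [
        "continuous risk management",
        "technical documentation",
        "data quality assurance",
        "human oversight",
        "accuracy, robustness, cybersecurity",
        "post-market monitoring",
    ],
    "limited": ["transparent user communication"],
    "minimal": [],
}

def compliance_checklist(risk_tier, serves_eu_users):
    """Return the illustrative duty list for a system serving EU users."""
    if not serves_eu_users:
        # other regimes (e.g., Indian data-protection law) may still apply
        return []
    if risk_tier not in OBLIGATIONS:
        raise ValueError(f"unknown risk tier: {risk_tier}")
    return OBLIGATIONS[risk_tier]

duties = compliance_checklist("high", serves_eu_users=True)
```

For an Indian company serving both markets, the session's "most stringent applicable standard" advice amounts to taking the union of each regime's checklist for the system's tier.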
Expert Dialogue on AI for Health Systems
The session at the India AI Impact Summit 2026, moderated by Samir Pujari from WHO, brought together an elite, multidisciplinary panel representing global organizations, industry leaders, policymakers, investors, and public health experts to deliberate on implementing responsible AI in health governance. Key introductions highlighted experts from ICMR, the Wellcome Trust, Anthropic, Google Health, WHO, WIPO, and the Gates Foundation, emphasizing the interconnected ecosystem required to scale AI-driven healthcare solutions. Discussions focused on critical foundational issues, particularly the role of intellectual property (IP) in supporting AI innovation while ensuring equitable access, and the centrality of ethics and governance from design to deployment. The speakers advocated for 'ethics by design', robust national regulations, and harmonized policy frameworks to translate high-level principles into actionable practice. The panel underscored the necessity of partnership, open innovation, legal certainty for investors, and capacity building to ensure AI technologies are equitable, scalable, and benefit both current and future generations.
- Panelists represent a cross-section of the AI health ecosystem, including WHO, ICMR, Wellcome Trust, Anthropic, Google Health, WIPO, and the Gates Foundation.
- WHO has established an international expert group on AI ethics and governance, and released widely used guidance (including for LLMs) since 2021.
- WIPO emphasizes that IP rights (patents, copyrights) are not barriers but key incentives for investment and knowledge dissemination in AI health innovations.
- IP systems, when well-designed, can foster both innovation and access, countering industry fears that IP impedes progress.
- Ethics must be embedded from the design phase ("ethics by design") and not treated as an afterthought; WHO offers online trainings for AI developers.
- There is an international consensus on ethical principles, but the challenge lies in translating these into national regulation and practical implementation.
- Partnership between academia, industry, government, and global agencies is necessary to scale trustworthy and effective AI health solutions.
- Open innovation, legal certainty, and ethical rigor are crucial to ensure benefits are equitably shared across populations.
Health AI and Energy AI Compendium
The session at the India AI Impact Summit 2026 focused on the transformative applications of artificial intelligence in enhancing accessibility for persons with disabilities, especially in diagnosis, rehabilitation, and personalized education. Key partnerships among institutions like IIIT Bangalore, Indian Emission, and the Changing Foundation were highlighted, showcasing collaborative efforts to leverage AI for orthosis, prosthetic development, and early diagnosis of neurodevelopmental disorders such as autism and ADHD. AI-driven solutions, including mobile screening apps and e-learning platforms like ESA, are enabling earlier intervention—reducing diagnosis ages to as young as 18-24 months, particularly critical during neuroplastic brain development. The discussion also stressed ethical, sociocultural, and data privacy considerations crucial for software development, especially in the context of accessibility. The session culminated in the launch of the sixth thematic AI impact compendium, recognizing contributing partners, authors, and dignitaries, and making digital casebooks available for public engagement.
- Major emphasis on AI for accessibility, especially in orthosis and prosthesis for people with disabilities.
- AI and machine learning algorithms are now used for analyzing behavioral patterns, speech, eye tracking, and facial expressions for early disability diagnosis.
- Mobile applications are making screening tools accessible to parents and healthcare workers in remote areas.
- Early diagnosis of neurodevelopmental disorders (e.g., autism, ADHD) now possible at 18-24 months—down from 4-5 years.
- National institutes are developing AI tools with partners like CDC and Dutch organizations for early childhood disability detection.
- Platforms like ESA are providing adaptable e-learning for children with autism and mild intellectual disabilities.
- Emphasis on ethical considerations, sociocultural sensitivity, informed consent, and data privacy in AI tool development.
- Launched six thematic AI impact compendiums with case studies on AI's real-world applications; digital copies now available online.
- Physical copies can be collected from the Ministry of Electronics and Information Technology post-summit.
- Acknowledgement of collaborative contributions from organizations including IIIT Bangalore, Indian Emission, Changing Foundation, and many others.
India’s Intelligence Infrastructure for Sovereign AI
The session featured an insightful discussion on the adoption and implementation of agentic and generative AI platforms in Indian enterprises, particularly public sector organizations. Co-founders of Fluid AI opened with reflections on the evolution of AI and their journey, including early wins and collaborations with global enterprises. The panel, comprising technical leaders from Nvidia, HPCL, and NABARD, dissected the rationale for the emerging trend of sovereign, on-premise AI deployments over traditional cloud setups. Key motivations included regulatory compliance (e.g., DPDP Act), data sovereignty, strategic independence from hyperscalers, and cost management. Nvidia highlighted the proliferation of open-source models, with a shift towards greater transparency and customizability, offering both model weights and training recipes to enterprises. Public sector executives described the organizational and budgetary hurdles in pioneering AI adoption, underlining the need for cautious but bold experimentation. The interaction emphasized a paradigm shift in problem-solving approaches, the acceleration of open-model innovation, and a cautious yet determined advance toward secure, localized AI in critical sectors.
- Fluid AI shared their journey, referencing their 2012 TechCrunch Disrupt victory and work with major global and Indian enterprises.
- A significant gap exists between consumer (66%) and enterprise (5%) perceptions of generative AI value, based on MIT and McKinsey research.
- HPCL and NABARD have adopted agentic AI platforms on-premise, prioritizing data sovereignty and regulatory compliance.
- NABARD plans to deploy about 20 new generative AI use cases, all on Nvidia hardware, citing compliance needs under India's DPDP Act and evolving statutory requirements.
- Strategic independence from hyperscalers was identified as a primary driver for choosing on-premise over cloud for critical national infrastructure.
- Nvidia is advancing open (and open-source) AI, providing both model weights and reproducible training pipelines (e.g., the Nemotron series), allowing enterprises deep customization.
- The frequency of high-quality open-model releases is accelerating, allowing enterprises to tailor solutions more flexibly.
- Cost control and budgeting advantages for on-premise deployments were highlighted, especially as token-based billing in cloud environments can be unpredictable.
- Panelists unanimously observed that early-stage AI investment and internal advocacy are major hurdles in large, regulated organizations.
AI and Governance: Finding the Right Balance for Innovation
The session at the India AI Impact Summit 2026 brought together key representatives from TechUK, the Alan Turing Institute, Microsoft, Holistic AI, and the British Standards Institution (BSI) to discuss AI assurance—encompassing trust, standards, and governance. The conversation underscored how AI assurance, a concept once discussed mainly in niche circles, has now become central to both governmental policy (with the UK, Europe, and Singapore leading) and industry practice (with companies like JP Morgan adopting these frameworks). The discussions outlined the significant risks posed by agentic (autonomous) AI systems—such as data leakage and security gaps—and emphasized the growing proliferation of standards and the need for effective, evidence-based AI testing. Panelists identified financial services, healthcare, and critical infrastructure as sectors most advanced in AI assurance due to regulatory pressure and risk exposure. The session highlighted strategic opportunities for cross-border cooperation between the UK and India in setting standards and assurance practices, emphasizing collaboration, practical implementation, sector-specific approaches, and the importance of continuously evolving benchmarks and audits. The key takeaway was that robust AI assurance, underpinned by appropriate standards, is foundational to building public trust and enabling safe and widespread adoption of AI technologies.
- AI assurance and standards are now mainstream in national and industry governance, cited as priorities in the UK, Europe, and Singapore.
- The spread of agentic AI (autonomous agents) introduces new privacy, security, and reliability risks, requiring stronger assurance mechanisms.
- MLCommons AILuminate, the NIST AI Risk Management Framework, and multiple ISO/BSI standards are shaping global benchmarks for AI assurance.
- Sectors leading in AI assurance include financial services, healthcare, critical infrastructure, and defense—driven by regulatory and reputational concerns.
- A clear call was made for collaboration between the UK and India on AI standards, practices, and frameworks.
- Effective assurance depends on having robust, living AI use case inventories and setting risk-appetite as a governance backbone.
- The need for sector-specific and evidence-based testing, as well as auditable systems, was highlighted for practical, trustworthy AI deployment.
- Holistic AI and Microsoft shared their approaches to end-to-end governance, continuous monitoring, and implementing responsible AI at scale.
AI DPI Sandbox: Co-Creating the Future
The session at the India AI Impact Summit 2026, moderated by Sushant Kumar (Founder and CEO of Kalpa Impact), emphasized India’s leadership in integrating artificial intelligence (AI) into digital public infrastructure (DPI). The conversation focused on the transformative potential and the challenges of embedding AI into DPI systems at population scale, highlighting the necessity for trust, inclusive governance, and coordinated experimentation. Shrimati Kavita Bhagat from the Ministry of Electronics and Information Technology outlined the crucial role of regulatory and policy sandboxes for responsible, safe, and iterative deployment of AI-powered DPIs. She stressed that DPI has enabled transformational public service delivery (such as Aadhaar, digital payments) and private sector innovation in India, and that AI integration can further enable proactive, adaptive, and efficient government services. However, this transition introduces new risks around bias, opacity, data governance, and accountability. Experimentation—through sandboxes—was repeatedly underscored as an operational necessity, not a luxury, for risk mitigation, transparency, and collaborative policy innovation. Lorraine from the Datasphere Initiative launched a landmark report, “Sandboxes for DPI: Co-creating the Blocks of Digital Trust,” which provides the world’s first comprehensive mapping and framework for DPI sandboxes, distinguishing between regulatory, operational, and hybrid approaches. The session concluded with an imperative: as DPI becomes more reliant on AI, ongoing, structured, and multi-stakeholder experimentation is essential to shape a trustworthy AI ecosystem that strengthens democracy and enhances public value.
- India’s DPI (like Aadhaar, the Unified Payments Interface, and India Stack) has demonstrated the viability of large-scale, inclusive digital systems.
- Government of India is actively integrating AI into DPI, aiming for proactive, AI-powered public service delivery.
- Key risks with AI in DPI: systemic bias, opacity, data governance gaps, and challenges of accountability and scalability.
- Experimentation with sandboxes—regulatory, policy, and technological—is asserted as a governance necessity for responsible AI adoption.
- Three main functions of sandboxes: controlled environment for real data testing, risk anticipation and mitigation (e.g., algorithmic bias), and enabling iterative governance via stakeholder feedback.
- India launched an AI governance framework and is seeking input on building robust risk assessment methodologies atop it.
- Lorraine (Datasphere Initiative) announced the launch of the first-ever report globally on DPI sandboxes, identifying 16 pioneering initiatives worldwide.
- DPI sandboxes classified into regulatory, operational, and hybrid, and defined formally for the first time.
- Report findings: feedback loops, institutional learning, and hybrid sandbox models are essential for DPI success.
- Policy recommendation: treat DPI experimentation and sandboxes as ongoing institutional capabilities, not ad hoc pilots.
- The summit serves as the first major convening in the global south on the intersection of AI and DPI, confirming India’s leadership role.
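The second sandbox function listed above, anticipating risks such as algorithmic bias before deployment, can be made concrete with a small fairness check of the kind a sandbox might run on real test data. The metric choice and the data below are illustrative assumptions, not anything prescribed in the session or the Datasphere report:

```python
# Illustrative sketch of one pre-deployment check a DPI sandbox might run:
# demographic parity difference of a model's decisions across groups.
from collections import defaultdict

def demographic_parity_gap(decisions, groups):
    """Max difference in positive-decision rate between any two groups."""
    totals, positives = defaultdict(int), defaultdict(int)
    for decision, group in zip(decisions, groups):
        totals[group] += 1
        positives[group] += int(decision)
    rates = [positives[g] / totals[g] for g in totals]
    return max(rates) - min(rates)

# Hypothetical sandbox test run: group "a" is approved 75% of the time,
# group "b" only 25%, giving a parity gap of 0.50.
decisions = [1, 0, 1, 1, 0, 0, 1, 0]
groups    = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(f"parity gap: {demographic_parity_gap(decisions, groups):.2f}")
```

In a real sandbox the interesting part is the feedback loop the report emphasizes: the gap would be measured on live-like data, compared against an agreed threshold, and fed back into both the model and the governing rules.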
Democratising AI Access: Data, Governance, and Market Design
The session at the India AI Impact Summit 2026 provided an in-depth overview of the evolution and impact of the Hiroshima AI process, launched under Japan’s leadership of the G7 in 2023. Anchored in the adoption and operationalization of the OECD AI principles, discussions focused on addressing persistent challenges in translating high-level principles into actionable, standardized, and comparable practices. Central to this is the introduction and uptake of the Hiroshima International Code of Conduct for Advanced AI Developers and the Hiroshima Reporting Framework, the first voluntary international mechanism for organizations to publicly demonstrate their AI risk management and accountability practices. With submissions from 25 organizations across nine countries—including India’s Infosys as a pioneer—the framework is set for major updates in its upcoming 2.0 version, integrating data comparability, direct links to OECD’s AI tools catalog, and enhanced support for deployers alongside developers. Insights from both government and private sector leaders underscored the urgency of capacity building around AI governance and transparency, promoting both shared language and understanding amid diverse national approaches. The session also highlighted the value of the reporting process for fostering internal and ecosystem-wide improvements in risk assessment, as brought out by Microsoft’s experience, and outlined plans for collaborative development and broad stakeholder engagement in upcoming framework iterations.
- The Hiroshima AI process, initiated during Japan's 2023 G7 leadership, aims to operationalize and standardize OECD AI principles for transparency and accountability.
- The Hiroshima International Code of Conduct for Advanced AI Developers establishes governmental expectations throughout the AI lifecycle.
- The Hiroshima Reporting Framework is the first voluntary, international, and comparable mechanism for organizations to report how they manage AI risk and implement accountability.
- To date, 25 organizations from 9 countries have submitted public reports, including Infosys—the first Indian company to participate.
- The upcoming Reporting Framework 2.0 will provide more comparable and aggregable data, is integrated with the OECD.AI tool catalog (over 700 AI governance tools and metrics), and will better support diverse types of organizations (developers and deployers alike).
- Integration of the reporting interface with the OECD tool catalog allows easy discovery and sharing of AI best practices, benefitting especially smaller and less-resourced organizations.
- Work is ongoing to tailor the framework to evolving technologies and roles within the AI lifecycle; a pilot is planned for March, and release is anticipated in Q2 2026.
- The process has highlighted the need for increased capacity-building on AI governance for both public policymakers and general users to improve transparency and informed policy.
- Microsoft and others emphasized that the reporting process strengthens internal risk management practices and deepens sector-wide shared understanding around trustworthy AI.
- Continued collaboration is planned, with public information sessions and active solicitation of stakeholder feedback for iterative refinement of the framework.
Inclusive AI for Persons with Disabilities
The session at the India AI Impact Summit 2026 focused on the critical intersection of artificial intelligence (AI) and disability inclusion, highlighting a transformative shift from disability being a marginalized issue to becoming a government and societal imperative in India. Speakers from the Royal Society (UK), the Department of Empowerment of Persons with Disabilities (India), and civil society, emphasized the need to design, deploy, and scale AI-driven assistive technologies that are accessible, affordable, and inclusive for India's estimated 200 million people with disabilities—a figure starkly higher than current official counts. The discussions underscored the risks of algorithmic exclusion, called for universally enforced digital accessibility standards in both public and private sectors, and showcased ongoing collaborations with startups and national institutes advancing AI-enhanced assistive tools. The Royal Society presented global perspectives, stressing user-centered co-design, improved disability data collection, affordable digital infrastructure (including recognition of smartphones as assistive tech), and empowering disabled communities as co-creators. The personal stories shared further emphasized the necessity of embedding real-world user experiences in technology development. The session set the stage for multistakeholder engagement and policy action to realize AI-enabled inclusion at scale across India.
- AI and disability inclusion is now a government imperative in India, not just a niche concern.
- Conservative WHO-based estimates suggest 200 million Indians live with significant disabilities versus the severely undercounted 26.8 million in the 2011 census.
- Only 3% of people in low-income countries access assistive products, compared to 80-90% in high-income nations; India’s rural access remains critically low.
- WHO projects 3.5 billion people globally will need assistive technology by 2050.
- Risks of algorithmic exclusion were highlighted, e.g., voice recognition failing on atypical speech and hiring algorithms biased against neurodivergent candidates.
- India moving towards mandatory, non-negotiable technical accessibility standards across government and private ICT products.
- Strategic partnerships in place (e.g., with Assistive Tech Foundation, Bangalore), and national institutes are leveraging AI for culturally-relevant tools.
- Innovative startups like Thinkerbell Labs (Annie) and Trestle Labs (Kibo) are producing commercially available assistive products.
- Digital public infrastructure is enabling disability access (e.g., rural users accessing services via voice or real-time captioning in local languages).
- Affordability remains a barrier: UK assistive tech can cost thousands of pounds annually, while India seeks solutions under ₹5,000 (approx. $60).
- Royal Society report calls for: 1) user-centric co-design, 2) high-quality disability data based on functional challenges, 3) recognition of smartphones as assistive tools.
- Personal narratives from end-users reaffirm: true accessibility requires disabled people in the design process, and seamless tech integration into daily life.
- The session featured a multi-stakeholder panel discussion to develop actionable paths forward for AI-driven disability inclusion.
Leading Through AI Transitions: Technology, Energy, and Security
The session at the India AI Impact Summit 2026 explored how India is fostering a grassroots, bottom-up approach to AI and innovation, emphasizing inclusivity from 'school to space' through the expansion of school labs and incubators, including social impact-oriented efforts in smaller towns. Key Indian stakeholders described a multi-layered ecosystem approach spanning startups, incubators, and integration with government ministries—including the Ministry of Defense—to ensure that new technologies have tangible, widespread benefits. Drawing on international perspectives, Israeli representatives outlined a sector-specific, risk-based regulatory approach designed to balance safety, trust, and innovation in a small, export-oriented market, while Australian experts echoed the value of measured, trust-building governance as an enabler of firm-level AI adoption. Across the panel, trust and public involvement emerged as core themes, with India emphasizing public engagement, inclusivity, and transparency in policymaking as critical to public trust—especially as AI adoption accelerates in sensitive sectors like defense and agriculture. Real-world case studies highlighted India's rapid scaling, such as the addition of 50,000 new school labs and AI-powered land mapping, and cited the country’s ability to orchestrate national-scale inclusivity as demonstrated in events like the G20 and pandemic vaccination program.
- India is expanding its network of AI-focused labs from 10,000 to 60,000 schools to foster early-stage innovation.
- India has more than 100 incubators, over 20 of them in small towns and cities, focused on social as well as financial impact.
- India is integrating startups and new technologies into government sectors, notably creating a new Ministry of Defense entity to accelerate adoption.
- Policy efforts are aimed at connecting multiple ministries and state governments into a cohesive innovation ecosystem.
- Israel is pursuing sector-specific, risk-based AI regulation rather than a blanket horizontal approach, balancing safety and innovation.
- Recently, Israel established an AI directorate to harmonize sectoral regulations and promote trust in AI.
- Australia is adopting technology-neutral, sector-driven governance, with an emphasis on intent, enforcement, and enabling innovation rather than stifling it.
- Building public trust is at the center of India’s AI and innovation strategies, with an emphasis on scalability and inclusivity (e.g., G20 outreach, vaccine program).
- India has increased its public defense budget by more than 15%, linking AI adoption and resilience building with increased public spending.
- Use cases such as AI-powered land mapping and agricultural drones highlight tangible grassroots impact and public-benefit orientation of Indian AI initiatives.
Asia AI Diplomacy: Governing AI in a Fragmented World
The session at the India AI Impact Summit 2026, organized by Safety Asia, focused on the urgent need for cross-border governance and crisis diplomacy mechanisms to address the rapid and uncertain risks posed by advanced AI systems, particularly from Asian and global south perspectives. The discussion highlighted realistic AI-driven crisis scenarios, such as AI-triggered financial cascades, election-disrupting deepfakes, and autonomous system incidents affecting international infrastructure, to illustrate the inadequacy of current international frameworks in managing incidents that unfold at 'machine speed.' Professor Stuart Russell underscored the fallacies in claims that AI cannot be regulated due to its pace and general-purpose nature, drawing parallels to long-standing safety and liability regulations in other sectors, and advocated for robust, risk-based regulatory approaches including behavioral red lines and restored liability. The session framed the necessity of defining acceptable risk thresholds for AI, including existential risks, and criticized the technology sector's resistance to demonstrable risk management. The panel, comprising global AI governance leaders, initiated a collaborative process to identify practical coordination channels, verification mechanisms, and authority structures needed for rapid international response to AI crises.
- Safety Asia is spearheading AI crisis diplomacy initiatives focused on governance and cross-border coordination from Asian and global south perspectives.
- Three acute AI crisis scenarios were detailed: (1) AI-driven financial market disruptions, (2) deepfake media destabilizing elections or peace negotiations, (3) cross-jurisdictional incidents involving autonomous infrastructure systems.
- Current international diplomatic and legal frameworks are ill-equipped for rapid, uncertain AI incidents that traverse jurisdictions and unfold in seconds or minutes.
- Professor Stuart Russell dismantled myths that AI is unregulatable due to its speed or general-purpose nature, highlighting existing analogues in medicine, electricity, and nuclear power regulation.
- Advocated risk-based regulatory models, including behavioral red lines and the restoration of liability for tech companies, which current industry agreements broadly disclaim.
- Highlighted that some tech industry leaders estimate existential risk from AGI at 10–50%, and called for regulation requiring demonstrable reduction of human-extinction risk to below 1 in 100 million per year.
- The panel featured global AI policy leaders from Singapore, Japan, Tajikistan, Taiwan, and Denmark, who were engaged to explore practicalities of initiating cross-border coordination during AI crises.
- The session marked the start of a collaborative process to develop concrete frameworks for rapid, effective, and internationally coordinated responses to AI-driven incidents.
AI in Modern Drug Discovery: From Genes to Clinical Trials | India AI Impact Summit 2026
This session at the India AI Impact Summit 2026 explored the democratization of AI through open standards, the challenges of AI security and identity, the critical role of data access and quality, and infrastructure pressures arising from AI's rapid expansion. Leading voices from open-source organizations and enterprise discussed how interoperability and collaboration—exemplified by models like Docker, Kubernetes, and emerging AI-specific protocols such as MCP (Model Context Protocol)—can catalyze more equitable AI innovation, support diverse business opportunities, and ensure secure and accountable agentic AI. The panel also addressed India's data center expansion needs, emphasizing the shift toward both mega-scale 'AI factories' and distributed, edge data centers to support widespread, resilient AI access. The importance of open data pipelines and collaborative model improvement was underlined as vital to drive AI advancements, while policy, security, and sustainability concerns were discussed as critical enablers and constraints for India's AI ambitions.
- Open standards and open source are positioned as key catalysts to democratize AI and break the dominance of tech giants, with emphasis on interoperability enabling builders from diverse backgrounds.
- Reference to historical success in the cloud native ecosystem (Docker, Kubernetes, Open Container Initiative) as a model for AI's transformation, allowing rapid business creation and ecosystem growth.
- Anthropic's Model Context Protocol (MCP) highlighted as a significant step by a major industry player toward unifying and standardizing agentic AI interactions, with an upcoming foundation and public video announcement.
- Security and identity in AI—especially in agent-driven, multi-tool call workflows—require fine-grained authorization (such as OAuth 2.1) and persistent traceability, integrating legacy security protocols into open AI architectures.
- Open data collaboration: Red Hat points to upstream projects and platforms like InstructLab to improve the quality and accessibility of AI training data, overcoming proprietary silos and privacy/cost barriers.
- India faces infrastructure challenges as AI data centers escalate in power requirements—from 25 kW/rack (H200) to projections of 2 MW/rack (Nvidia, 2027)—posing questions regarding power, water, and local impact in tier 1–3 cities.
- Emergence of a new operational metric, 'tokens per kilowatt,' signals a paradigm shift in measuring the efficiency and scale of AI computation.
- Report on AI openness to be published across the summit, with additional panels scheduled on topics like resilience and sovereignty.
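As a rough illustration of the 'tokens per kilowatt' idea above, the sketch below converts serving throughput and rack power draw into tokens per kilowatt-hour. The throughput figures are hypothetical assumptions for the example; only the 25 kW and 2 MW rack power numbers come from the session:

```python
# Illustrative sketch (not an official metric definition): estimating
# "tokens per kilowatt-hour" for an AI inference deployment.

def tokens_per_kwh(tokens_per_second: float, power_draw_kw: float) -> float:
    """Tokens generated per kilowatt-hour of energy consumed."""
    if power_draw_kw <= 0:
        raise ValueError("power draw must be positive")
    tokens_per_hour = tokens_per_second * 3600  # seconds per hour
    return tokens_per_hour / power_draw_kw

# Hypothetical throughputs for the two rack classes mentioned in the session:
# a 25 kW rack serving 50,000 tokens/s vs. a 2 MW rack serving 6,000,000 tokens/s.
small_rack = tokens_per_kwh(50_000, 25)        # 7,200,000 tokens/kWh
mega_rack  = tokens_per_kwh(6_000_000, 2_000)  # 10,800,000 tokens/kWh
print(f"{small_rack:,.0f} vs {mega_rack:,.0f} tokens per kWh")
```

The point of such a metric is that raw power draw alone is misleading: a denser rack can still be the more efficient choice if its tokens-per-energy ratio is higher.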
Trust in the Age of Synthetic Media
The panel session at the India AI Impact Summit 2026, titled 'Building Trust in the Age of Synthetic Media', convened key stakeholders from international technology companies, global policy organizations, and Indian government regulators to discuss solutions to challenges posed by synthetic media. Central to the discussion was the C2PA (Coalition for Content Provenance and Authenticity) – an open, global technical standard developed to provide content provenance, transparency, and cryptographic proof of digital media origins. Representatives from ITI (Information Technology Industry Council), Google, Adobe, and India's Ministry of Electronics and Information Technology (MeitY) acknowledged the critical role of trust and transparency as AI-generated content proliferates, highlighting India's leadership opportunity as it hosts its first global AI summit in the Global South. Panelists emphasized that, while C2PA is not a silver bullet, it is a foundational and scalable step towards cross-industry and global solutions. Indian regulators outlined recent legislative efforts—in privacy, cyber law, and IT rules—aimed at balancing citizen empowerment, business facilitation, and digital trust. The overall consensus was that multi-stakeholder, public-private approaches, and open standards are essential for trustworthy AI-driven digital ecosystems, both in India and globally.
- Panel focused on building trust amid the proliferation of synthetic media, with transparency and provenance as core themes.
- The C2PA (Coalition for Content Provenance and Authenticity) standard, founded by Adobe, Microsoft, Google, and others, provides cryptographic proof and metadata for digital content, helping ensure authenticity.
- India’s new laws—Digital Personal Data Protection Act, updated IT Act, and regulatory moves on platforms and infrastructure—center on digital trust and were highlighted as recent policy shifts.
- Google highlighted both longstanding and newly announced tools for image provenance, including SynthID (to identify AI-generated content) and content credentials, and emphasized the importance of digital literacy and embedding provenance metadata.
- ITI advocated for multi-method approaches (C2PA, watermarking, human review), recognized C2PA is not perfect but a critical starting point, and underscored the need for global harmonization over regulatory patchwork.
- MeitY stressed the centrality of citizen empowerment and ease of living in regulatory development, highlighting extensive consultation with public and industry, and the ambition to set legislative standards globally.
- All panelists agreed that industry-government collaboration and open, interoperable standards are essential to scalable, effective solutions for synthetic media and AI transparency.
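To make the provenance idea concrete, the sketch below binds a hash of a media file to a signed claim, so that any edit to the bytes invalidates the claim. This is a conceptual toy, not the C2PA specification: real C2PA manifests use X.509 certificate chains and COSE signatures rather than a shared HMAC key, and embed the manifest in the asset itself.

```python
# Conceptual sketch of content provenance (NOT the C2PA spec): a claim
# records a digest of the media bytes plus generator info, and is signed
# so that neither the claim nor the media can be altered undetected.
import hashlib
import hmac
import json

SIGNING_KEY = b"demo-key"  # hypothetical; real systems use asymmetric keys

def make_claim(media: bytes, generator: str) -> dict:
    digest = hashlib.sha256(media).hexdigest()
    claim = {"generator": generator, "content_sha256": digest}
    payload = json.dumps(claim, sort_keys=True).encode()
    claim["signature"] = hmac.new(SIGNING_KEY, payload, "sha256").hexdigest()
    return claim

def verify_claim(media: bytes, claim: dict) -> bool:
    body = {k: v for k, v in claim.items() if k != "signature"}
    payload = json.dumps(body, sort_keys=True).encode()
    sig_ok = hmac.compare_digest(
        claim["signature"], hmac.new(SIGNING_KEY, payload, "sha256").hexdigest())
    return sig_ok and body["content_sha256"] == hashlib.sha256(media).hexdigest()

original = b"pixel data"
claim = make_claim(original, "example-ai-model")
print(verify_claim(original, claim))          # True
print(verify_claim(b"tampered data", claim))  # False
```

This also shows why panelists call C2PA "not a silver bullet": the scheme proves a claim about the bytes it signed, but cannot detect media that simply carries no credentials at all.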
AI, Governments & Business: Building Nations Through Social Good
This session at the India AI Impact Summit 2026 focused on the challenges and requirements of AI safety and cross-compliance between India and the European Union. The workshop, led by experts specializing in AI ethics, compliance, and law, highlighted the differences in regulatory approaches: while the EU operates under one of the world’s strictest frameworks with its AI Act, India currently prioritizes innovation over regulation, lacking a dedicated AI Act. The discussion detailed the EU’s risk-based AI system classification and examined key safety parameters such as human oversight, data quality, transparency, technical documentation, and cybersecurity. Cross-border compliance was emphasized, with companies urged to satisfy both Indian standards (as they evolve) and existing EU regulations, especially when handling high-risk AI applications or personal data. The speakers stressed that effective AI governance in an interconnected digital market requires building systems that are robust, transparent, regularly monitored post-deployment, and designed to respect differing legal values. The session underscored the need for ongoing collaboration and adaptation as India moves toward formal AI legislation.
- The EU AI Act is one of the world’s strictest AI regulatory frameworks, categorizing systems based on risk (high, limited, minimal).
- India, with a population of 1.4 billion, currently lacks a dedicated AI Act; it prioritizes innovation, with formal regulation possibly forthcoming.
- Cross-compliance is mandatory: companies building AI in or for Europe must adhere to the EU AI Act, regardless of their location; similar expectations will apply as India drafts its own regulations.
- Key parameters for safe AI include: risk classification, continuous risk management, human oversight, technical documentation, transparency to users, data quality, accuracy, robustness, cybersecurity, and post-market monitoring.
- Bias in datasets (e.g., health data) poses major risks; continuous authentication and diverse data are critical for fairness and safety.
- Cybersecurity risks such as data poisoning must be addressed through robust security standards.
- Post-market monitoring is essential to ensure ongoing performance, safety, and adaptation to real-world feedback.
- Companies operating across Indian and European markets must design AI systems that satisfy the strictest applicable safety and compliance requirements.
- Upcoming Indian AI regulation may introduce new compliance points and safety parameters reflective of the country's unique constitutional and cultural values.
- EU-India (and other global) collaborations and multi-stakeholder workshops are essential to harmonize AI standards and facilitate innovation.
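The risk-tier routing described above can be sketched minimally, assuming a hypothetical mapping of use cases to tiers; the tier assignments and obligation lists here are illustrative simplifications of the EU AI Act, not legal guidance:

```python
# Illustrative sketch (not legal advice): routing an AI use case to a
# compliance checklist by EU AI Act-style risk tier. The mappings below
# are example assumptions, not an authoritative classification.
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

# Hypothetical use-case classification for illustration only.
EXAMPLE_TIERS = {
    "social_scoring": RiskTier.PROHIBITED,
    "cv_screening_for_hiring": RiskTier.HIGH,
    "medical_triage_support": RiskTier.HIGH,
    "customer_chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

# Condensed obligations per tier, echoing the parameters listed above.
OBLIGATIONS = {
    RiskTier.PROHIBITED: ["do not deploy in the EU"],
    RiskTier.HIGH: ["risk management system", "human oversight",
                    "technical documentation", "post-market monitoring"],
    RiskTier.LIMITED: ["disclose AI interaction to users"],
    RiskTier.MINIMAL: ["voluntary codes of conduct"],
}

def obligations_for(use_case: str) -> list:
    tier = EXAMPLE_TIERS.get(use_case, RiskTier.MINIMAL)
    return OBLIGATIONS[tier]

print(obligations_for("cv_screening_for_hiring"))
```

For cross-compliance, the practical rule the speakers described amounts to taking the union of obligations across every market served, i.e., designing to the strictest applicable tier.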
Unlocking Opportunities Through AI: Rural Women Lead the Change | India AI Impact Summit 2026
The India AI Impact Summit 2026 featured the launch of the Tata AI Sakhi Immersion Program, a landmark initiative aimed at elevating AI literacy among rural women at a national scale. The event, attended by over 1,600 women from across India, saw participation and support from major Tata Group companies, including TCS, Tata Steel, Tata Power, and Tata Chemicals, under their CSR initiatives. Dignitaries such as Smt. Smriti Irani and key Tata leaders lauded this push for technological inclusion. The program emphasizes learning AI in local languages via mobile phones, bypassing barriers like English and laptops, to enable women not just as learners but as digital entrepreneurs and community changemakers. Testimonies from women beneficiaries highlighted the real-life empowerment and increased livelihoods enabled by previous Tata digital programs, with the integration of AI now promising to amplify these outcomes across entrepreneurship, government benefits access, documentation, and market connection. The event underscored a holistic vision of progress where People, Planet, and Progress intersect through purposeful tech in India’s grassroots.
- Tata Group launched the 'Tata AI Sakhi Immersion Program' targeting AI literacy for rural women at an all-India scale.
- Over 1,600 women participated live at the summit, representing Tata's digital inclusion across India.
- The program enables learning and applying AI in local languages directly on mobile phones—no English or laptops needed.
- Mentors guide participants in using AI for product design, marketing, understanding government schemes, translations, and applications.
- Tata’s CSR arms—TCS, Tata Steel, Tata Power, Tata Chemicals—have cumulatively empowered tens of thousands of women through digital entrepreneurship and literacy initiatives.
- Specific impact stats: 10,866 women have become entrepreneurs, TCS's 'Digital Didis' earn ₹1,000–₹10,000 per month, and Tata Steel Foundation's 'Disha' program has reached 47,000+ women digitally, delivering over ₹90 crore in direct government benefits.
- Programs like Manasi Plus in healthcare and Okhai in handicrafts have provided income and recognition to rural women and artisans; for example, 379 tribal artisans have earned over ₹1 crore since 2022.
- The integration of AI is set to enhance the reach and impact of these CSR efforts by improving women's access to services, entrepreneurial resources, and new markets.
- Personal stories (e.g., Tanuppriya Kesari from Jharkhand) illustrate transformative change—from unemployment to self-reliance and local recognition achieved through Tata's digital initiatives.
- The summit aligns with the themes of People (empowerment), Planet (sustainability), and Progress (inclusive growth) through technology.
- High-profile dignitaries, notably Smt. Smriti Irani, lent their presence and support, emphasizing the national significance of the initiative.
AI for our oceans of tomorrow: Data, Models and Governance
The session at the India AI Impact Summit 2026 focused on the transformative role of artificial intelligence (AI) in advancing India's blue economy, with specific emphasis on ocean sciences, sustainable resource management, and digital public infrastructure. Distinguished panelists and keynote speakers, including directors from the Indian Meteorological Department and the Norwegian Ambassador to India, highlighted national and international collaborations, policy initiatives such as the Deep Ocean Mission, and the critical need for data-driven approaches to tackle climate change and enhance economic growth. Key achievements were presented, notably India's advanced ocean observation and predictive modeling systems, improved cyclone forecasting, and near-zero loss of life from recent cyclones. Both government and international partners stressed the importance of open data, public-private partnerships, responsible AI governance, and interoperable digital platforms to maximize the benefits of ocean data, improve livelihoods, and build coastal resilience while minimizing environmental risks. Norway and India reaffirmed a commitment to joint innovation in digital public infrastructure and sustainable blue economy practices, with AI as a central pillar.
- India's Ministry of Earth Sciences, with strategic support from Ernst & Young, is prioritizing the development of the blue economy to meet national economic targets.
- The Deep Ocean Mission is a flagship government initiative focusing on ocean exploration, resource utilization, climate impact assessment, and resilience-building.
- A major project, 'Samudrayaan,' aims to send humans to a depth of 6,000 meters to enhance understanding of ocean sciences.
- India has achieved almost zero cyclone-related deaths in recent years due to improved early warning and forecasting systems.
- The country now utilizes a comprehensive network of satellites, ocean buoys, coastal and island observations, and modeling systems for real-time ocean data.
- There is a shift toward open data policies, enabling broader access to oceanic datasets for industries, academia, and startups.
- Public-private partnerships and international collaborations, notably with Norway, are being encouraged to leverage AI for sustainable ocean governance.
- Norway advocates for open, interoperable, and trusted digital public infrastructure, open-source solutions, and responsible AI in oceanic applications.
- AI is recognized as a catalyst for better forecasting, fisheries management, vessel traffic optimization, and emission reduction across maritime sectors.
- India is positioned as a global leader in digital public infrastructure for oceans, with an ambition to share solutions internationally.
AI-Powered Ports: Transforming Logistics and Operations
The panel discussion at the India AI Impact Summit 2026 focused on reimagining port operations through the integration of artificial intelligence (AI). Organized by the V.O. Chidambaranar Port Authority and the Institute for Governance Policies and Politics (IGP), the session brought together prominent policymakers, industry leaders, and technology experts to chart a national way forward for AI-powered port modernization. Panelists emphasized that AI should be viewed not merely as a technological upgrade, but as a transformative governance and decision-making tool central to India's economic resilience and global competitiveness. Discussions highlighted the crucial need for underlying digital public infrastructure, standardized processes, and interoperability between ports to overcome current challenges like data fragmentation and operational silos. Recent progress, such as the 'one nation, one port process' standardization initiative, was acknowledged as a major step, while the panel stressed transitioning from mere digitalization to embedding AI-driven, anticipatory, and judgment-based operational models. Real-world port challenges—from trade disruption events to weather unpredictability—were cited as areas where AI's predictive power and integrated decision-making could vastly improve safety and efficiency. The session set the tone for India’s push towards a future where maritime logistics are driven by interconnected, AI-enabled systems that support environmental sustainability, rapid trade movement, and workforce upskilling.
- Panel organized by the V.O. Chidambaranar Port Authority and the Institute for Governance Policies and Politics (IGP), focused on 'AI-powered ports: reimagining efficiency and operations.'
- Ports seen as central to India’s economic resilience—95% of Indian trade by volume and 70% by value occurs via maritime routes.
- Consensus that AI is not just a software layer but a profound decision-making and governance capability.
- India’s recent 'one nation, one port process' initiative standardized 800+ disparate port documents and processes into a harmonized structure of ~200 documents and 50–60 core processes, enabling AI integration.
- AI opportunities identified in planning (predictive systems, just-in-time), operations, environment, monitoring, safety, and compliance.
- Key challenges include digital infrastructure gaps, legacy data silos, lack of interoperability, vendor lock-in, need for open-source solutions, human resource upskilling, and questions of accountability when AI makes decisions.
- Distinction made between 'smart ports' (automation and tech-led development) and 'thinking ports' (AI-driven, judgment-based, predictive and cross-system operations).
- Real-world incidents—like container vessel accidents and global supply chain disruptions (e.g., Suez Canal blockage)—demonstrate urgent need for predictive, AI-based solutions.
- Event underscored collective resolve to shape national policy and AI standards for future-ready Indian ports.
AI for Equitable and Resilient Health Systems
The session at the India AI Impact Summit 2026 brought together leading experts from government, industry, and global organizations to discuss the transformative impact of artificial intelligence (AI) on India's healthcare landscape. Key discussions focused on the successful large-scale adoption of AI-powered health screening tools (such as for TB and breast cancer), the importance of embedding AI across the care continuum (from imaging to diagnosis and decision support), and the necessity of robust, minimalistic digital public infrastructure to enable innovation at population scale. The session highlighted that AI in healthcare is more likely to create and transform jobs rather than lead to mass displacement. It also emphasized the critical need for strong clinical validation, practical skilling of healthcare workers, and adaptable governance frameworks for effective and equitable AI deployment. India's unique integration of digital public infrastructure, vibrant startup ecosystem, and a population-scale approach positions it as both a global leader and learning platform for AI-enabled universal health coverage, with broader lessons for addressing resource inequities both within the country and worldwide.
- AI-powered TB screening tools are in use at scale in India, contributing to increased case detection and better health outcomes, addressing India's high TB burden.
- Indian startups such as Niramai have impacted 400,000 women through AI-enabled breast cancer screening using thermal imaging, improving early detection at the community level.
- AI systems like Qure.ai (for chest X-rays) and AI solutions for diabetic retinopathy are now employed in both primary care and remote settings, with support for low-technology environments.
- Emphasis on AI transforming and creating healthcare jobs across the value chain rather than causing widespread job loss.
- Strong clinical validation—both in hospitals and the field—was cited as essential for AI tools' success at scale.
- AI innovations are most successful when embedded across the complete care continuum (from early detection to follow-up), not as isolated pilots.
- Digital public infrastructure (like India’s ABDM/National Health Stack and Aadhaar) provides the scalable, interoperable foundation for AI deployment, enabling mass innovation and democratizing access.
- Standardization in medical imaging and remote expert guidance (e.g., virtual cockpits for radiology) by companies like Siemens are narrowing the rural-urban health divide.
- Lessons from digital infrastructure (such as UIDAI/Aadhaar) show that solutions must be open, minimal, empowering, and enable a broad ecosystem for innovation and contextualization.
- Panelists agreed on the need for tailored skilling, supportive financing and governance, and a strong commitment to equity to ensure AI in healthcare closes rather than widens gaps.
Ministry of Education Pushing the Frontier of AI in India
The session at the India AI Impact Summit 2026 highlighted the transformative initiatives and strategic investments being made by the Indian government, academia, and industry to accelerate the impact of artificial intelligence (AI) in education and other critical sectors. Key announcements included the development of digital public infrastructure (DPI) such as the Bharat Edu AI stack, facilitated through Centers of Excellence (AICOEs), aiming to create an inclusive, scalable, and sovereign AI ecosystem tailored to India's diverse needs. Panelists emphasized India's leadership in consumer-native AI startups, underscored the importance of developing indigenous AI models that reflect Indian cultural and linguistic diversity, and recognized that data curation, affordability, and robust resource integration (including talent and compute) are critical to sustainable innovation. The session stressed the urgent need for public-private partnerships and the integration of AI tools into national platforms to democratize quality education, bridge the digital divide, and establish India’s blueprint for global AI leadership.
- The Ministry of Education, through AI Centers of Excellence (AICOEs), is heavily investing in digital public infrastructure (DPI) and platforms to spur innovation across domains, with a strong focus on education.
- India now leads the US in the number of funded consumer-native AI internet startups, with more than 1,000 expected by the end of the year.
- India’s internet ecosystem boasts 900 million connected users, with 850 million daily active users and seven hours of average daily use, creating a vast market for consumer AI solutions.
- Despite billions invested in edtech, fewer than 10% of India’s 290 million students have paid for online education, pointing to gaps in affordability and access.
- AI-powered education platforms are seeing exponential growth: the fastest-growing Indian education-AI company achieved ₹100 crore in revenue within 6-7 months and is projected to reach ₹1,000 crore in 18-24 months.
- The Bharat Edu AI stack is being designed as a digital public infrastructure analogous to Aadhaar (identity) and UPI (payments), intended as a global blueprint for scalable, inclusive AI in education.
- The new AI-driven education stack aims to support 22 official languages, enable personalized learning, integrate Indian pedagogical values, and address special needs—significantly benefiting holistic student development.
- India is targeting a gross enrollment ratio (GER) in higher education of 50% by 2035, with AI playing a key role.
- Building indigenous LLMs (large language models) and vertical AI models for the Indian context is seen as essential for sovereignty, relevance, and opportunity in sectors like education, agriculture, and healthcare.
- Data is identified as the most critical infrastructure for AI success: India’s long written history and unique datasets represent a strategic advantage.
- National digital learning platforms (like Sati) will require integration of AI, low-bandwidth models, and offline capabilities to ensure AI-driven education reaches every part of India, countering the risk of digital divide 2.0.
- Collaboration among government, academia, startups, and industry is highlighted as vital for building scalable, sustainable, and inclusive AI platforms.
The GenAI Talent Imperative: Building the Global Workforce
The session at the India AI Impact Summit 2026 opened with a multi-perspective panel discussing the evolving nature of workforce readiness in the age of AI, especially with the rise of generative AI (GenAI) technologies. Moderated by Dr. Vijay Swaminathan, the discussion emphasized that workforce readiness is no longer limited to conventional upskilling or certification, but encompasses AI fluency, the ability to redefine job tasks, and governance-oriented critical thinking. Panelists from leading organizations such as Microsoft, ISACA, and various start-ups underscored the transformation from tool-centric learning to a more holistic approach where adaptability, explainability, human judgment, and a culture of curiosity are essential to navigate rapid industry changes. A strong consensus emerged on the vital role of soft skills, contextual application of AI solutions, and the democratization of responsibility for AI outcomes across all levels within organizations. The global perspective highlighted India's growing prominence and proactivity in embracing AI frameworks, while also acknowledging the ongoing need for adaptable, defensible, and ethical integration of AI in workflows. Ultimately, the panelists agreed that the future workforce must be prepared not just technically, but with the resilience and judgment to thrive amid uncertainty and to harness AI collaboratively and responsibly.
- Workforce readiness in AI requires three layers: (1) AI tool fluency, (2) redefining job tasks and workflows to leverage AI, and (3) governance capabilities—including critical judgment and validation of AI outputs.
- Panelists stressed that workforce readiness is context-dependent, varying by role, organization, and industry, and cannot be addressed via a one-size-fits-all approach.
- Soft skills (such as creativity, curiosity, adaptability, and willingness to experiment or fail) are regaining importance alongside technical skills.
- India is positioning itself as a leader in AI readiness with strong government, industry, and education sector collaboration and emerging governance frameworks.
- Organizations should foster a culture of continuous learning, rapid knowledge absorption, and appropriate application of AI tools rather than mere tool familiarity.
- Explainability, defensibility, and ethical consideration of AI results are crucial; reliance on AI outputs without human oversight poses risk.
- Responsibility for AI governance is shifting from siloed compliance teams to frontline employees, requiring democratized decision-making and accountability.
- Global perspectives indicate that India is aligned with leading international organizations in prioritizing both technical skills and responsible, transparent AI adoption.
Cross-Border AI Collaboration: Research, Startups, and Scale
The session at the India AI Impact Summit 2026 focused on fostering cross-border and multi-sectoral collaboration in applied AI, highlighting the need for collective intelligence, practical interoperability, agency, and trustworthy partnerships. Speakers stressed the importance of building shared frameworks to enable systems from different countries and sectors to interact safely, leveraging practical use cases from Europe, India, and Africa. The agenda prioritized inclusive dialogue among startups, governments, industry, and civil society, with a hands-on World Cafe session designed to build consensus around ecosystems, data sharing, and sovereignty. Case studies from Norway, India, and TCS illustrated the real-world potential of cross-cultural, accessible AI solutions in sectors such as energy, art, and public service. The session was designed to culminate in a collaboratively authored policy brief, aiming to set actionable pathways for future international AI cooperation.
- Emphasis on joint creation of policy through collective intelligence, with all contributors invited to co-author a four-page policy brief as an outcome of the session.
- Adoption of an interactive World Cafe format to ensure inclusive participation from all stakeholders—startups, government, industry, and civil society.
- Key focus on building 'agency, not just access,' allowing local teams to adapt AI systems to their own realities instead of being coordinated by foreign-designed systems.
- Recognition that AI is evolving into a societal coordination layer, requiring robust interoperability standards, cross-border validation, and shared safety principles.
- Risks highlighted: misaligned AI systems across national borders can lead to systemic instability, especially in critical infrastructure sectors.
- Best practice examples: deployment of shared simulation environments, digital sandboxes, and joint laboratories for testing and aligning AI systems.
- Cintf's global collaboration projects, including open digital infrastructure initiatives and joint model development in Africa, were cited as templates.
- Advocacy for India joining the European research framework to boost international collaboration.
- Case studies: TCS project in Norway democratizing access to cultural artifacts through AI, and digital and AI interventions at India's Kumbh Mela to enhance accessibility and experience.
- Commitment to context-sensitive, culturally aware AI development, ensuring local acceptability and impact.
- Next steps outlined for maintaining momentum on collaboration and practical implementation of discussed ideas.
AI, Energy, and Finance: Future-Proofing India’s Data Centres
The session at the India AI Impact Summit 2026 explored the deep interdependence between artificial intelligence (AI), energy, and finance—framing these three as a crucial 'trifecta' for the future. Speakers highlighted that the rapid growth of AI is triggering an unprecedented surge in energy demand, especially through hyperscale data centers, which could double global data center electricity consumption by 2030 to levels matching Japan’s entire power usage today. While data center growth is currently focused in the US, China, and Europe, emerging markets, including India, present vast opportunities given their reliable and affordable power. AI itself offers game-changing solutions to energy optimization through grid balancing, improved renewable integration, outage reduction, and infrastructure planning—yet its full potential is restricted by data access, digitization disparities, regulatory barriers, and specialized skills shortages.
Policy and financing innovations were central highlights: India’s current data center capacity stands at 1.7 GW, projected to grow to 8 GW by 2030, fueled by policy incentives such as a draft 20-year tax holiday and dedicated AI missions. States like Uttar Pradesh, Tamil Nadu, and Maharashtra are offering their own incentives. The state-owned REC Limited is expanding its traditional energy-sector financing to include end-to-end funding for AI data centers, leveraging decades of experience and recent forays into infrastructure financing. There is exponential global investment momentum in data centers—projected at $4.2 trillion between 2025 and 2030—with AI-related companies accounting for three-quarters of the S&P 500 market cap increase since 2022. However, ensuring sustainable, resilient, and timely deployment—especially in emerging economies—calls for coordinated action across policy, energy planning, investment, and technology adoption.
- Hyperscale AI data centers now have energy demands capable of powering up to 100,000 households each.
- Global data center electricity consumption is expected to double by 2030—reaching levels equal to Japan’s current consumption.
- Currently, 85% of data center demand is clustered in the US, China, and Europe, but emerging markets are poised for rapid growth if reliable power is ensured.
- AI can help neutralize the emissions impact of data centers by 2035 if optimally applied, according to International Energy Agency estimates.
- Data access, quality, digitization gaps, and skill shortages are major barriers to leveraging AI in the energy sector.
- Concentration of data centers in specific regions is placing acute strain on local electricity grids—e.g., in Virginia, USA, 25% of the state’s electricity is used by data centers.
- Transmission infrastructure lags behind data center construction, risking project delays or stranded assets.
- Data centers could contribute up to 3% of global power sector emissions by 2030; clean energy PPAs and on-site generation are being increased as solutions.
- Global investment in data centers almost doubled from 2022 to 2024, with $4.2 trillion in projected investment between 2025 and 2030.
- AI companies contributed $12 trillion of the $16 trillion S&P 500 market cap increase since 2022.
- India's data center capacity stands at 1.7 GW, expected to reach 8 GW by 2030, aided by the proposed 20-year tax holiday and permanent establishment status.
- REC Limited, a state-owned powerhouse in energy finance, is now extending its scope to comprehensively fund AI data centers, leveraging prior success in both energy and infrastructure financing.
- REC has financed 64 GW of renewable and 120 GW of thermal projects, and now aims to be the preferred finance partner for India's AI revolution.
- Indian states are actively offering incentives to attract AI and data center investments; 10% of REC’s loan book is now in infrastructure, including new digital highways.
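The doubling and build-out figures above imply compound annual growth rates that are easy to sanity-check. A quick sketch (the six-year 2024-to-2030 window is an assumption; the session only gave the 2030 endpoints):

```python
# Back-of-envelope check on the growth claims above. Assumes a
# six-year window (2024 -> 2030), which the session did not state.

def implied_cagr(multiple: float, years: int) -> float:
    """Compound annual growth rate needed to grow by `multiple` over `years`."""
    return multiple ** (1 / years) - 1

# Global data-center electricity use doubling by 2030:
print(f"Doubling in 6 years: {implied_cagr(2.0, 6):.1%} per year")            # ~12.2%

# India's capacity path: 1.7 GW -> 8 GW by 2030:
print(f"1.7 GW -> 8 GW in 6 years: {implied_cagr(8 / 1.7, 6):.1%} per year")  # ~29.5%
```

Even under this rough assumption, India's projected build-out implies growth well over twice the global rate.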
AI at Scale: Building High-Performance Data Centres
This session at the India AI Impact Summit 2026 spotlighted innovative strategies to address the surging energy demand driven by AI-powered data growth, with an emphasis on actionable solutions for resilient, sustainable, and scalable data center infrastructure. Representatives from the National Lab of the Rockies (formerly National Renewable Energy Lab) detailed their extensive research and collaborative projects, including advanced energy systems integration with global partners like India. The lab showcased its unique 'chip-to-grid' approach—optimizing energy usage and grid integration from chip design, facility cooling, and power electronics, to grid-level interactions. Real-world 'living laboratories' test next-generation data center energy systems, cooling, and grid flexibility, leveraging AI-driven modeling and hardware experimentation. The focus was on optimizing load flexibility, integrating robust uninterruptible power systems (UPS), hybrid energy solutions, and utilizing AI to enhance grid operations, cyber-security, and research productivity. These initiatives aim to expedite infrastructure build-outs, balance affordability, reliability, and sustainability, and set scalable standards for the evolving global digital infrastructure.
- The National Lab of the Rockies has broadened its remit from renewables to advanced energy integration for data centers and grid modernization.
- Over 80 countries, including India, have collaborated with the Lab for more than two decades on energy systems innovation.
- The Lab's new 'chip-to-grid' strategy addresses energy challenges holistically—from chip cooling to grid transmission and bulk generation.
- A 10,000 square foot, 10 MW living data center laboratory in Colorado is used for real-time testing of advanced cooling methods, including water-based systems that recycle waste heat.
- Modeling and geospatial analysis enable optimal data center site selection considering price, water, available infrastructure, and local ordinances.
- The Lab supports research in power electronics, thermal management, and introduces new grid-interactive and load-flexible data center operations.
- Key projects include: grid-flexible data centers, rapid UPS response to load changes, advanced load type modeling, and hybrid energy supply for pulsating data center loads.
- AI is used to optimize grid operation, enhance cybersecurity, automate energy management, and accelerate research timelines at reduced cost.
- The Energy Systems Integration Facility and Flatirons Campus provide hardware-in-the-loop and virtual utility simulations up to 10 MW scale for grid/data center research.
- A significant challenge addressed is balancing the rapid infrastructure deployment with competing demands for affordability, reliability, utility profitability, and sustainability.
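The geospatial site-selection modeling mentioned above weighs power price, water, available infrastructure, and local ordinances against one another. A toy weighted-scoring sketch of that kind of multi-criteria analysis (all weights and candidate values are invented for illustration, not Lab data):

```python
# Toy multi-criteria scoring for data-center site selection, in the
# spirit of the modeling described above. Weights and candidate
# values are hypothetical illustrations.

CRITERIA_WEIGHTS = {           # higher weight = more important
    "power_price": 0.35,       # cheaper power -> higher score
    "water_availability": 0.25,
    "grid_infrastructure": 0.25,
    "permitting_ease": 0.15,   # proxy for local ordinances
}

def site_score(scores: dict[str, float]) -> float:
    """Weighted sum of per-criterion scores, each normalized to 0..1."""
    return sum(CRITERIA_WEIGHTS[c] * scores[c] for c in CRITERIA_WEIGHTS)

candidates = {
    "site_a": {"power_price": 0.9, "water_availability": 0.4,
               "grid_infrastructure": 0.7, "permitting_ease": 0.8},
    "site_b": {"power_price": 0.5, "water_availability": 0.9,
               "grid_infrastructure": 0.8, "permitting_ease": 0.5},
}

ranked = sorted(candidates, key=lambda s: site_score(candidates[s]), reverse=True)
print(ranked[0], f"{site_score(candidates[ranked[0]]):.2f}")  # site_a 0.71
```

In a real study each criterion score would come from geospatial data layers (tariffs, water stress indices, substation proximity, zoning), and the weights themselves would be a policy choice.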
Driving AI Impact Through Real World Action
The session at the India AI Impact Summit 2026 focused on the NamoShakti breast cancer screening mission, an AI-driven initiative that has become the world's largest breast cancer screening program. Panelists highlighted how the project, conceived at the previous NXT conference, leverages Indian-developed, non-touch, radiation-free thermal imaging technology integrated with AI pattern recognition to provide instant, dignified, and private screenings for women in both urban and rural India. The campaign addresses critical access, awareness, and social stigma barriers that have limited breast cancer detection, with existing screening reaching under 1% of the at-risk population. With mobile van deployments, real-time digital dashboards, and secure, geo-tagged screening, the initiative has already performed over 30,000 screenings in Varanasi and Ambala, with plans to double screening centers. The technology's 95% specificity, portable design, and culturally sensitive delivery model have enabled deep community penetration, supported by ASHA health workers and cancer survivor volunteers, fundamentally shifting India's approach to preventive women's health. NamoShakti exemplifies translating conference-room innovations into scalable, humanitarian impact, moving AI from labs into lives.
- NamoShakti breast cancer screening mission is now the largest such program globally, leveraging AI-driven, non-invasive, radiation-free technology.
- Over 30,000 screenings conducted to date in Varanasi and Ambala using more than 42 dedicated mobile vans, tracked in real-time with digital dashboards.
- Incidence of breast cancer in India: 28% of all female cancers, with more than 2.3 lakh new cases and approximately 90,000 deaths annually.
- Screening rates in India are less than 1% of the target female population, highlighting a huge gap in access to early detection.
- NamoShakti uses Indian-made devices ('Thermalytix' and 'SMILE') that are CDSCO-approved, CE-marked, and US FDA-cleared, emphasizing Make in India credentials.
- Technology enables instant, private, no-touch screenings, generating digital, geo-tagged, and encrypted reports for each participant.
- Portable screening model reaches rural and under-served urban areas, addressing logistical and socio-cultural barriers, especially stigma around breast cancer.
- The AI-enabled system boasts 95% specificity, reducing strain on the larger health system by filtering and funneling high-risk cases efficiently.
- Expansion plans include doubling the number of centers and further integrating advocacy by leveraging ASHA workers and survivor volunteers for community outreach.
- The mission embodies the Summit's philosophy: turning panel discussions into real-world deployment and tracking measurable, societal impact.
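For context on the 95% specificity figure above: specificity is the fraction of disease-free women the screen correctly clears, TN / (TN + FP). A minimal sketch with invented counts (not NamoShakti data):

```python
# Specificity = true negatives / (true negatives + false positives):
# the fraction of healthy women the screen correctly clears.
# Counts below are invented for illustration only.

def specificity(true_negatives: int, false_positives: int) -> float:
    return true_negatives / (true_negatives + false_positives)

# Out of 10,000 screened women without disease, a 95%-specific test
# clears 9,500 and flags 500 for (unnecessary) follow-up.
print(specificity(9_500, 500))  # 0.95
```

High specificity matters here because every false positive funnels a healthy woman into follow-up diagnostics; keeping that number low is exactly the "filtering and funneling" relief on the wider health system that the session describes.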
AI and the State: Governing Intelligence in Government
The panel at the India AI Impact Summit 2026 brought together influential experts from government, academia, civil society, and industry to discuss the inherent tensions faced by governments as both AI regulators and AI users. Panelists emphasized that governments are held to higher standards than private sector actors due to citizens' lack of choice in interacting with state services. Discussions focused on the challenges of aligning regulatory frameworks with government operational needs, the risk of conflict of interest in countries hosting major AI labs, insufficient government investment in responsible AI infrastructure (including privacy, security, test and evaluation), and the importance of transparency and accountability in state AI deployments. Drawing on UK research, the panel highlighted public skepticism, the need for clear terminology, and the problems that arise from hyped, inadequately evaluated government AI deployments—such as the risk of errors in critical social service applications. The panel advocated for governments to set best practices in testing, evaluation, and assurance, thereby leading both by regulation and demonstration, and cautioned that careless or opaque government AI deployment could erode public trust and democracy itself.
- Governments face a core tension in being both AI regulators and users; they are held to higher standards as citizens cannot opt out of public services.
- Liability and governance gaps in regulatory systems are often magnified for governments, especially in areas like foundation models and agentic systems.
- Countries hosting leading AI labs face conflict of interest: balancing economic/military competitiveness with regulatory oversight.
- There is a critical lack of investment in privacy, security, testing, and responsible AI infrastructure in government AI initiatives.
- GenAI requires new evaluation methodologies distinct from traditional machine learning; current practice is underdeveloped.
- Transparency, accountability, and strong documentation are essential for trustworthy government AI deployments, especially as government use of agents introduces new autonomy and accountability challenges.
- Public trust is at risk in low-trust environments; failures of AI in government (as opposed to the private sector) have broader democratic and societal implications.
- UK research (Learn Fast and Build Things, 32 reports) reveals confusion due to lack of clear terminologies and the use of narrow use cases to justify broader AI investments.
- There is a persistent hype cycle; calls for slowing down and rigorous evaluation are often dismissed as pessimism ('doomerism').
- A specific example in the UK: transcription tools for social workers are beneficial but introduce risks (hallucinations), showing the need for thorough evaluation and human-in-the-loop processes.
Ethical AI as Digital Public Infrastructure
The session, organized by the Digital Empowerment Foundation (DEF) in partnership with Globethics, AI Safety Connect, and supported by UNESCO’s Women for Ethical AI South Asia, focused on the growing risks of algorithmic exclusion as India rapidly deploys AI-enabled Digital Public Infrastructure (DPI) across welfare, healthcare, education, finance, and governance. Panelists stressed that AI-enabled DPI is not a neutral infrastructure—its design, deployment, and data governance shape who is included or left behind in digital society. The session highlighted real-life grassroots challenges, notably that existing public digital services are often out of reach for marginalized communities due to inaccessibility, affordability, lack of local language support, and hidden barriers. Speakers argued that AI systems currently extract data from people but too rarely serve them directly, especially those living outside urban centers. There was a consensus that without intentional, community-centric, and accountable design, AI risks deepening socio-economic divides and rendering those without digital access invisible or powerless. The panel called for robust frameworks to ensure inclusion by design, build trust, and hold industry and government accountable for the social consequences of AI-driven systems.
- DEF has established 2,400+ community information resource centers and is piloting the Samrid Gram project with the Indian government's Department of Telecommunications as a model for AI-enabled DPI in villages.
- 65% of India’s population is not meaningfully connected to digital infrastructure, making accessibility and affordability urgent issues.
- AI-enabled DPI is recognized as non-neutral—it determines access, eligibility, and voice in the digital ecosystem, often in opaque and hard-to-challenge ways.
- Algorithmic exclusion is described as increasingly silent, preemptive, and difficult to contest, with exclusion harming communities even before the harm becomes evident.
- DEF’s grassroots evidence shows that many welfare services require costly, inaccessible, or linguistically incompatible digital authentication, barring remote and marginalized individuals from basic rights.
- Panelists stressed that current AI/data systems often prioritize data extraction from people over delivering clear benefits to them.
- The session outlined three focus areas for ethical AI DPI: 1) understanding where exclusion enters the AI-DPI stack, 2) governance mechanisms for AI as public infrastructure, and 3) ensuring inclusion and trust by design—particularly for gender, language, and industry accountability.
- Global panelists warned that as AI black-box systems proliferate, the ability even for system designers or regulators to understand or intervene in exclusionary processes is eroding.
- The social cost of opting out from AI or digital participation is rapidly rising, with exclusion risking further marginalization.
- A call was made for people-centric, affordable, trusted, and community-governed digital public infrastructure rather than solutions imposed by government or corporates.
From AI Strategy to Scalable Industrial Solutions
The session at the India AI Impact Summit 2026 focused on the practical convergence of AI and robotics—termed 'physical AI'—highlighting live demonstrations of TCS’s humanoid (Ekko) and quadruped (Poochie) robots and their deployments in real industry situations. Key contrasts between the global AI adoption trends and India’s unique opportunities were discussed, emphasizing India’s advantage to leapfrog directly to AI-native factories and industrial ecosystems. The session also introduced TCS's no-code AI orchestrator platform, enabling participants to configure and deploy AI-based workflows to physical devices in an accessible hands-on manner. The session transitioned from strategic context and technology trends to concrete case studies, hands-on experimentation, and workflow deployment by attendees, showcasing India's capacity for scalable, real-world AI-robotics integration.
- TCS showcased its first humanoid, Ekko, and quadruped robot, Poochie, both already deployed at multiple customer sites.
- Demonstrations included miniature robotic arms, automated guided vehicles (AGVs), and other AI-driven physical assets.
- Physical AI is defined as the convergence of digital AI and industrial robotics, enabling 'software-defined physical intelligence.'
- India is uniquely positioned to develop AI-native factories and industrial corridors, skipping legacy limitations faced in the West.
- A live case study from an agritech customer showed AI and robotics reducing warehouse safety incidents by 90% and operational downtime by 30%, with deployments in China (30 robots), Poland (7), and LATAM underway.
- Emphasis on 'robotics as a service' and fractional robotics to align with India’s large workforce, focusing on amplification rather than workforce replacement.
- Hands-on workshop used TCS’s no-code AI orchestrator, a platform with workflow templates for deploying AI solutions to physical robots, requiring little to no coding.
- Participants worked directly with diverse devices (robotic arms, AGVs, quadrupeds), mentored in configuring and testing real AI-driven tasks.
- Focus on addressing last-mile challenges in services like public utilities, education, and healthcare via physical AI.
- Deployment and usage of AI orchestrator platform intended to dramatically simplify and scale enterprise-level AI-robotics integration.
From Co-Design to Courtroom: Building AI for Fairer Justice Systems
The session focused on the persistent backlog in judicial systems worldwide, highlighting India's staggering 50 million pending cases. It introduced the Fair Trial Advisor (FTA), an AI-powered expert system designed to help judges and lawyers efficiently navigate the highly complex requirements of the right to a fair trial, rooted in international law. Developed in partnership with Microsoft’s AI for Good Lab, the FTA uses retrieval augmented generation to ground answers in authoritative sources, minimizing hallucinations and promoting transparency. The tool was stress-tested through a collaborative hackathon with key stakeholders at Oxford, receiving overwhelmingly positive feedback for saving time and providing structured legal guidance. Users stressed the importance of trust, transparency, clear citing of sources, and domestic adaptation. Key recommendations included narrowing the scope, piloting the tool within domestic legal systems, language accessibility, and strong safeguards against overreliance and opaque reasoning. The FTA aims to supplement—not replace—human judgment, supporting fair trial guarantees worldwide while balancing innovation with responsible integration in judicial settings.
- India faces a massive backlog of 50 million pending court cases, including 180,000 older than 30 years.
- 44% of judicial actors in a recent UNESCO survey are using AI tools in their work, but only 9% have received any training.
- The Fair Trial Advisor (FTA) is an AI system designed to help legal professionals interpret and apply the right to a fair trial, referencing over 28,000 international decisions.
- FTA leverages retrieval augmented generation (RAG) to ensure responses are grounded in a curated, authoritative legal dataset, mitigating AI hallucinations.
- Developed in partnership with Microsoft’s AI for Good Lab and based on seminal legal texts, the FTA supports users with plain-language legal Q&A, complete with citations.
- A multi-stakeholder hackathon at Oxford involved judges, lawyers, technologists, and civil society to co-design the tool, focusing on real-world needs, ethics, and governance.
- Feedback emphasized time-saving, clear structured guidance, and the need for transparent citation and sourcing.
- Participants recommended a narrow focus (starting with judges), domestic law integration, language accessibility, and safeguards against overreliance.
- Trust factors highlighted included transparent source citation, accuracy validation, regular audits, independent oversight, and explicit acknowledgment of limitations.
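The retrieval-augmented generation approach described above can be made concrete with a small sketch. Everything here is illustrative: the actual FTA's models and corpus are not public, so the retriever below is a toy word-overlap scorer and the prompt-building step stands in for the grounded LLM call:

```python
# Minimal RAG sketch in the spirit of the FTA pipeline: retrieve the
# most relevant passages from a curated legal corpus, then instruct
# the model to answer ONLY from those passages, with citations.
# The word-overlap scorer and tiny corpus are toy stand-ins; a real
# system would use embeddings and a full document store.

def score(query: str, passage: str) -> int:
    """Toy relevance score: number of shared lowercase words."""
    return len(set(query.lower().split()) & set(passage.lower().split()))

def retrieve(query: str, corpus: list[dict], k: int = 2) -> list[dict]:
    """Return the k passages most relevant to the query."""
    return sorted(corpus, key=lambda p: score(query, p["text"]), reverse=True)[:k]

def build_prompt(query: str, passages: list[dict]) -> str:
    """Ground the model: answer only from cited sources."""
    context = "\n".join(f"[{p['citation']}] {p['text']}" for p in passages)
    return ("Answer using ONLY the sources below, citing each claim; "
            "if the sources are insufficient, say so.\n"
            f"Sources:\n{context}\nQuestion: {query}")

corpus = [
    {"citation": "ICCPR art. 14(3)(d)", "text": "right to legal assistance at trial"},
    {"citation": "ICCPR art. 14(1)",    "text": "right to a fair and public hearing"},
    {"citation": "ICCPR art. 9(3)",     "text": "right to trial within a reasonable time"},
]
top = retrieve("does the accused have a right to legal assistance", corpus, k=1)
print(top[0]["citation"])  # ICCPR art. 14(3)(d)
```

The grounding step is what mitigates hallucination: the model is told to answer only from retrieved, citable passages, so the citations travel with the answer and can be checked by the judge or lawyer using the tool.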
Transforming Business Through AI
The session on scaling AI in the enterprise at the India AI Impact Summit 2026 brought together leading founders and investors to discuss the rapid adoption and evolution of AI across Indian and global enterprises. Highlights included Automation Anywhere's deployment of 400 million AI agents in India, with widespread adoption by major banks, universities, and nonprofits, and a candid discussion of both the opportunities and challenges of scaling AI. The panel emphasized a marked shift from traditional per-seat or time-based software pricing to outcome-based and usage-based models, driven by the enterprise imperative for tangible ROI. Panelists noted that about 10% of Indian enterprises are breaking away as aggressive adopters, demonstrating that value capture from AI is no longer only about labor replacement but also significant capital and operational efficiencies, such as inventory reduction and ticketing automation. Despite these advances, there remains uncertainty around optimal usage measurement and pricing models, highlighting this as an evolving but central topic for enterprise AI's continued impact and monetization.
- Automation Anywhere has deployed over 400 million AI agents in production in India, with major penetration across top banks, GCCs, and hundreds of enterprise clients.
- A notable 10% of Indian enterprises are breaking away as leaders in AI adoption, approaching problems differently and focusing on outcomes rather than AI promises.
- AI-driven efficiencies extend beyond labor replacement—for example, automation has generated $200 million in free cash flow and enabled $400 million inventory reduction for enterprise clients.
- AI agent-based automation enabled one enterprise client to autonomously resolve 84% of all support tickets—saving $250 million in IT spend by moving away from seat-based models.
- Outcome-based and ROI-driven pricing models, rather than per-seat, time-based, or simple token-based pricing, are becoming the norm for enterprise AI deployments.
- AI adoption is already a top three strategic priority for over 70% of global enterprises, a trend mirrored in India, and enterprise AI budgets are increasing steadily.
- The ongoing challenge remains the lack of a universally accepted pricing and value measurement model, with enterprises experimenting with metrics such as debt collection, loan origination, and issue resolution rates.
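The pricing shift the panel described can be made concrete with a toy comparison. Every figure below (seat count, per-seat price, per-resolution price) is a hypothetical illustration, not a number from the session:

```python
# Toy comparison of seat-based vs outcome-based pricing for AI-driven
# support automation. All figures are hypothetical.

def seat_based_cost(seats, price_per_seat_year):
    """Classic per-seat licensing: cost scales with headcount."""
    return seats * price_per_seat_year

def outcome_based_cost(outcomes_delivered, price_per_outcome):
    """Outcome-based pricing: cost scales with results delivered
    (e.g., autonomously resolved support tickets)."""
    return outcomes_delivered * price_per_outcome

seat_cost = seat_based_cost(5_000, 1_200)           # 5,000 seats at $1,200/yr
outcome_cost = outcome_based_cost(2_000_000, 1.50)  # 2M resolutions at $1.50
print(f"seat-based: ${seat_cost:,}  outcome-based: ${outcome_cost:,.0f}")
```

The structural point is that the second model's cost tracks the value metric directly, which is why the panel framed it as the direction enterprise AI pricing is heading.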
Planet-Scale Intelligence: AI for Climate and Growth
The session at the India AI Impact Summit 2026 focused on the foundational requirements and interconnected systems necessary to scale AI for inclusive economic growth in India and globally. Speakers emphasized that scaling AI is not just about technology, but also about talent, infrastructure, sustainable energy, and, critically, trust and inclusion. The discussion highlighted India's unique position to shape the global AI landscape by developing AI as a part of critical national infrastructure—building on the legacy of public digital goods such as UPI. Experts from sectors including telecom, energy, manufacturing, and policy explored challenges like high costs of AI service delivery, the need for affordable and sovereign infrastructure, and the pitfalls of fragmented institutional incentives and insufficient technical capacity. Concrete examples, such as gigawatt-scale AI-ready data centers by Reliance and global intelligence platforms for agriculture and climate resilience, illustrated both the scale and the urgent necessity for interoperable, accessible, and affordable AI systems. The session concluded with a call to reimagine AI as the next layer of digital public infrastructure, enabling 'intelligence inclusion' at population scale through coordinated public-private efforts and ecosystem-level investments.
- AI cannot scale without investments in local talent, elastic infrastructure, sustainable energy, and public trust.
- India is emerging as a leader not only as a market for AI but as a shaper of AI governance, deployment, and technology for population-scale impact.
- Reliance is building a gigawatt-scale, sovereign, AI-ready data center to lower AI service costs from ₹10/minute to affordable levels (targeting 10–20 paisa/minute).
- AI-driven solutions can address enduring challenges in sectors like education and healthcare, potentially leapfrogging traditional barriers.
- Current cost structures for AI access (e.g., AI tutors) are prohibitive for most Indians, necessitating massive domestic infrastructure ownership.
- Global case studies (from Africa, SE Asia, and Latin America) demonstrate growing government interest in planetary intelligence solutions for challenges such as climate resilience, food security, and energy transition.
- Technical and analytical capacity gaps, fragmented institutional incentives, and immature procurement systems impede large-scale adoption of AI intelligence platforms.
- Quantifying the opportunity cost of not investing in AI intelligence systems remains a policy challenge.
- Building AI as a shared, open, interoperable public good—similar to UPI for finance—can drive both innovation and inclusion.
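The cost target in the Reliance bullet implies a 50-100x reduction; a quick back-of-the-envelope check (unit conversion only, using just the figures quoted above):

```python
# ₹10/minute today vs a target of 10-20 paisa/minute (1 rupee = 100 paise).
current_cost_paise_per_min = 10 * 100        # ₹10/min expressed in paise
targets_paise_per_min = (10, 20)
reductions = [current_cost_paise_per_min / t for t in targets_paise_per_min]
print(reductions)  # a 50-100x cost reduction depending on the target hit
```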
Scaling AI in Healthcare: Evidence-Based Solutions for Silent Heart Attacks | India AI Impact Summit
In this session of JPAL's AI for Social Good seminar, Professor Ziad Obermeyer from UC Berkeley highlighted the transformative potential of AI-powered low-cost diagnostic tools in improving healthcare access and outcomes, especially in low-resource settings. Obermeyer shared personal and global anecdotes underscoring widespread issues of underdiagnosis, even in developed countries, due to limited access to medical data and advanced diagnostics. He introduced a collaborative project with JPAL South Asia in Tamil Nadu, where both expensive hospital-grade tests (like cardiac ultrasounds) and affordable, user-friendly devices (such as $50-$60 mobile electrocardiograms) were used in community screenings. The team trained AI algorithms not just on doctors' interpretations, but on patient outcomes—reducing biases and improving true clinical impact. Early findings indicate that the AI screening algorithm can effectively identify high-risk individuals, including those who would be missed by traditional Western-centric risk factor approaches. Cost-effectiveness analyses show promising results, supporting the scalability of this AI-enabled approach. Plans are in place for rigorous randomized evaluations to compare the effectiveness of AI-based and traditional screening methods. The demonstration emphasized the expanding ecosystem of affordable digital health devices and the critical need to integrate AI responsibly, using local data, to democratize life-saving diagnostics globally.
- AI-driven handheld ECG devices costing as little as $60 are being used in rural Tamil Nadu for community health screening.
- Collaborative fieldwork involved both advanced hospital-grade cardiac ultrasounds and inexpensive mobile diagnostics.
- The AI is trained directly on patient outcomes (not just doctors’ interpretations) to reduce bias and enhance diagnostic precision.
- Preliminary results: flagging the top 2-5% high-risk individuals detects silent heart attacks at a rate of 10% vs. 2% in the general population.
- The AI successfully identifies at-risk patients missed by traditional risk factor models, especially relevant for South Asian populations.
- Cost-effectiveness: The approach delivers results at ~$2,000 per DALY (disability adjusted life year), within Indian guidelines for public health interventions.
- A randomized evaluation is underway to compare traditional and AI-enhanced screening in real-world conditions.
- The initiative leverages other affordable digital health tools (e.g., pulse oximeters, wearable rings, retina imaging) to further broaden data collection.
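The screening logic in the bullets above (rank individuals by an AI risk score, flag the top few percent for follow-up) can be sketched as below. The scores and outcome labels are synthetic; only the shape of the calculation reflects the session, which reported roughly 10% true positives among the flagged top 2-5% versus about 2% in the general population, a roughly 5x enrichment.

```python
# Sketch of top-k% risk flagging for community screening, on synthetic data.

def flag_top_k(scores, k_percent):
    """Return indices of the top k% highest-risk individuals."""
    n_flag = max(1, int(len(scores) * k_percent / 100))
    order = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    return set(order[:n_flag])

def detection_rate(flagged, labels):
    """Fraction of flagged individuals who truly had the condition."""
    return sum(labels[i] for i in flagged) / len(flagged)

# Synthetic cohort of 10 people: two true silent heart attacks (label = 1).
risk_scores = [0.95, 0.10, 0.80, 0.05, 0.30, 0.20, 0.15, 0.25, 0.12, 0.08]
outcomes    = [1,    0,    1,    0,    0,    0,    0,    0,    0,    0]

flagged = flag_top_k(risk_scores, 20)             # flag the top 20%
rate_flagged = detection_rate(flagged, outcomes)  # hit rate among flagged
base_rate = sum(outcomes) / len(outcomes)         # population base rate
print(f"flagged: {rate_flagged:.0%}, base: {base_rate:.0%}")
```

Training on outcome labels like `outcomes` (rather than on clinicians' reads) is what the session credited with reducing inherited diagnostic bias.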
Cognitive Capital: Positioning India as the Brain of the Global AI Economy
The opening session of the India AI Impact Summit 2026 focused on the convergence of generative AI and future telecom networks, emphasizing their interdependent and transformative roles, particularly with the advent of 5G Advanced and 6G technologies. Esteemed speakers, including policy experts and industry leaders, outlined how generative AI will move from an optimization tool to an integral part of network architecture, influencing everything from network management to customer support and immersive, real-time industrial solutions. The session also addressed the challenges associated with AI-driven networks, such as security and governance, and introduced India’s technical contributions to international AI benchmarking in telecom. In parallel, industry innovators highlighted AI’s capacity for content synthesis and discovery, underscoring opportunities in ebooks, audiobooks, and multilingual content distribution, reinforcing India’s potential as a fast-growing digital market.
- The AI Impact Summit 2026 is positioned as a historic, global inflection point for India and the international AI community.
- A high-profile panel brought together leaders from government, telecommunications, global industry, and digital content sectors.
- Generative AI is expected to fundamentally transform telecom by becoming natively embedded in network architectures, especially in 6G.
- Key applications include network self-optimization, automated policy, intelligent network slicing, synthetic data generation, and digital twins for simulation.
- India’s Department of Telecommunications (DoT) has initiated technical contributions to ITU-T Study Group 13 for benchmarking generative AI in telecom.
- Security and trust concerns—such as prompt injection, data poisoning, and privacy leakage—are being prioritized, with calls for robust governance frameworks.
- Future networks (notably 5G Advanced and 6G) will allow generative AI to be deployed at scale due to high bandwidth, low latency, and edge computing.
- Generative AI's impact is also felt in content creation and distribution, with the Indian market for audiobooks projected to reach $35 billion by 2030 and rapid growth in the Asia-Pacific region (7% CAGR).
- Multilingual, multi-format content processing and discovery—enabled by AI—is a core growth area for Indian and global digital content providers.
Child-Centric AI Policy: Safeguarding India’s AI Future
The session at the India AI Impact Summit 2026 focused on the urgent need for responsible, inclusive, and practical approaches to child-centric AI. Organized in partnership with Child Light and Space to Grow, the session addressed the dual nature of AI: its vast benefits for young Indians and its increasing risks, notably the surge in AI-generated child sexual abuse material. Presenters shared survey findings highlighting Indian youths' nuanced perception of AI, recognizing both opportunity and danger, with particular concern about gendered online harm. The expert group, convened by MeitY (Ministry of Electronics and Information Technology), outlined robust, multi-stakeholder recommendations, emphasizing a shift from 'child safety' to 'child well-being,' active roles for parents and communities, solutions designed into technological architecture, and India’s global leadership—especially for the Global South. Key recommendations include establishing a Child Safety Solutions Observatory, creating an innovation sandbox, including children and youth in advisory roles, strengthening legal frameworks—especially for AI-generated abuse material—and prioritizing AI literacy and resilience in curricula for children and their guardians. The session underscored the moral and structural imperative for India to weave child well-being into the nation’s digital ambitions and position itself as a global standard-setter.
- Child-centric AI session led by Child Light and Space to Grow in partnership with MeitY to address AI and child safety.
- Over 300 million children globally were victimized by technology-facilitated abuse in 2024; a 1,325% year-on-year increase in AI-generated sexual abuse material was reported.
- A Child Light poll of 410 young Indians revealed that while AI is seen as powerful and beneficial, only one in four feel online spaces are safe; girls feel notably more vulnerable than boys.
- 48% of young respondents see tech companies as primarily responsible for online safety, followed by parents, carers, and the national government.
- Recommendations include shifting from a 'child safety' to a 'child well-being' framework to better capture opportunities and risks.
- Proposal to create a Child Safety Solutions Observatory to aggregate innovations and best practices at national level.
- Establishment of a Global South working group and network, with India’s leadership highlighted.
- Launch of an innovation sandbox and challenge in Q2 2026 to combat digital harms to children.
- Recommendation to set up a Youth Safety Advisory Council, ensuring meaningful child and youth involvement.
- Call for clearer legal frameworks to criminalize both synthetic and actual child abuse material, with the burden of proof on tech industry actors.
- Urgency to introduce impact assessments (potentially ISO-style certification) for high-interaction AI systems affecting children.
- Advocacy for national curricula to integrate AI safety, well-being, and digital literacy for children and parents, filling gaps in existing education.
AI as a Public Health Gamechanger
The session at the India AI Impact Summit 2026 focused on the transformative impact of AI in healthcare, highlighting significant advancements, challenges, and policy developments both in India and globally. Leading stakeholders—including government officials, healthcare administrators, international collaborators, and startup ecosystem leaders—discussed the expanding adoption of AI in medical devices, diagnostics, hospital workflows, and public health programs. The Netherlands shared its national AI implementation strategy, emphasizing ethical guidelines and regulatory frameworks under the EU AI Act, while Indian representatives showcased pioneering applications such as AI-powered early detection for TB and breast cancer, and integrated digital public infrastructure (e.g., the Ayushman Bharat Digital Mission). Throughout, the need for responsible AI usage, clinician acceptance, and embedding AI solutions within real-world workflows for equitable healthcare delivery was repeatedly stressed. The panel highlighted the shift from pilot projects to scaled implementations, the convergence of public and private sector innovation, and the critical role of ethical, interoperable AI frameworks in realizing affordable, accessible healthcare.
- Philips has introduced AI in imaging devices, enabling more efficient diagnosis with fewer scans and lower radiation exposure.
- Hospital efficiency is increasing as AI tools help serve more patients (from 20-25 to potentially 50 daily on some machines), reducing healthcare costs.
- 'Software as a medical device' is gaining ground in India, with numerous startups entering the space.
- The Netherlands has shifted from AI pilot projects to large-scale implementation, guided by a national program and recent 2025 policy note 'Realization of AI in Healthcare.'
- The EU AI Act provides the world's first risk-based legal framework for AI, supporting responsible, safe, and effective adoption, especially in healthcare.
- EU has allocated 1 billion euros for scaling up AI adoption and integration, including in healthcare.
- Recent WHO designation of Delft University's center as the first global center for AI health governance.
- Indian healthcare institutions are building innovation cells and deploying AI-powered solutions like chatbots and digitized workflows to improve patient experiences.
- AI applications in India already include mass TB screening with handheld X-rays and AI, and early detection tools for breast cancer at the population level.
- Medical imaging equipment in India, such as CT and MRI machines, are increasingly AI-enabled for higher accuracy and efficiency.
- Integration of AI into public health workflows and digital public infrastructure (e.g., Ayushman Bharat Digital Mission with open APIs) is recognized as essential for health equity and impact.
- Ethical use and clinician trust remain key concerns, emphasizing that AI should be used under supervision by healthcare professionals and not as standalone by laypersons.
AI and India’s Economic Growth
The session at the India AI Impact Summit 2026 commenced with addresses from senior leadership of the Institute of Chartered Accountants of India (ICAI), emphasizing their pioneering role in integrating artificial intelligence (AI) within the chartered accountancy profession. Honorable past president Charanjot Singh Nanda highlighted the proactive initiation of AI training and certification for both ICAI members and students, resulting in over 30,000 AI-certified chartered accountants since July 2024. Vice President Mangesh Kindra reinforced AI’s practical applications for sectors like MSMEs, finance, and healthcare, while underscoring the critical importance of ethical standards. President Prasana summarized key recent national policy announcements: the Union Budget 2026-27 features massive investments—such as a 21-year tax holiday for data centers, a national mission to establish over 38,000 GPU compute units, and a deep tech fund exceeding ₹20,000 crore to drive R&D and AI startups. The government projects that by 2035, AI could add $500–600 billion to India’s GDP, particularly boosting productivity in finance and manufacturing. Panel discussions opened with sectoral deep-dives, notably focusing on banking and automobiles, with an analysis of how AI is driving financial inclusion, digitization, and financial market growth. The session positioned ICAI as an essential steward of responsible and ethical AI-mediated transformation for India’s economic and professional landscape.
- ICAI began AI-focused upskilling and certification in 2024; over 30,000 chartered accountants are now AI-certified.
- Union Budget 2026-27 allocates: 21-year tax holidays for data centers, over 38,000 national GPU compute units, ₹10,300 crore for AI startups, and ₹40,000 crore for Semiconductor Mission 2.0.
- Deep tech fund with ₹20,000 crore for AI R&D and tax holidays for AI startups up to 2030.
- Skilling initiatives introduced AI education into curricula of 15,000 schools nationwide.
- ICAI’s course 'Aura' and hackathons aim to create real-world AI solutions for students and professionals.
- AI projected to inject $500–600 billion into GDP by 2035; finance and manufacturing could see 20–25% productivity gains.
- Financial sector digitization: digital payments via UPI, 52+ crore Jan Dhan accounts, ₹2.3 lakh crore deposits, and fintech spend at $830 million (since 2024).
- Emphasis on strong ethical frameworks for AI implementation, reinforcing ICAI’s role as custodian of trust, regulatory compliance, and public accountability.
- AI seen as key enabler for MSMEs, healthcare, startups, and broader community welfare—aligning with India’s 'AI for all' approach.
AI for India’s Next Billion
The session at the India AI Impact Summit 2026 convened global policymakers and sector leaders to discuss AI governance, accountability, and the imperative for inclusive AI-driven growth in the Global South. Speakers underscored the diffusion of accountability in AI, the roles of governments and companies, and the risks of repeating historical power imbalances in technology deployment. India's achievements in digital public infrastructure were used as a blueprint for equitable AI models, advocating open, interoperable, multilingual, and accessible systems. Climate action was addressed through integrating digital and decarbonization strategies, emphasizing the necessity to consider the environmental footprint of AI and the need for intentional sustainability and equitable value extraction from data originating in the Global South. The discussion concluded with a call for context-sensitive regulatory guardrails, drawing on global legal traditions to ensure that AI development and climate action advance together, equitably, and sustainably.
- AI accountability was highlighted as diffused across platforms, governments, and users, risking unclear responsibility unless regulatory clarity is established.
- Panelists emphasized that corporate self-regulation is insufficient; external oversight is essential for credible AI governance.
- Citizens and users must be empowered to demand accountability and redress in AI systems.
- India's digital public infrastructure enabled 50 years of progress in seven years (per BIS), via open-source, globally interoperable systems, serving as a model for AI platforms.
- AI must be multilingual, affordable, accessible, and targeted to uplift populations below the poverty line; otherwise, it risks exacerbating inequality.
- The Global South provides substantial data for AI models (e.g., Indian data contributes more to training large language models than US data), but risks having value extracted without commensurate return.
- Open public digital layers are advocated to stimulate private-sector innovation while ensuring broad societal benefits.
- Climate action and the digital/AI revolution must be pursued in tandem—AI can optimize renewable energy, agriculture, and disaster response, but data centers' resource demands (energy, water) require intentional sustainability.
- AI strategies must transparently measure environmental impact, prioritize local energy-efficient applications, and embed equity in global AI governance.
- Calls were made to reject false binaries between digital growth and climate action, emphasizing that solutions must be context-sensitive and rooted in diverse legal and social traditions.
Emerging Tech Futures for India: India AI Impact Summit 2026
The session at the India AI Impact Summit 2026 centered on India's journey towards achieving digital sovereignty in artificial intelligence (AI) and data infrastructure. Key stakeholders from government, industry, and academia underscored the urgency of transitioning from being AI consumers to AI architects, focusing on homegrown AI models, sovereign cloud and chip infrastructure, and robust, scalable architectures tailored for India's unique needs. Indian officials highlighted progress in renewable energy for data centers, investments and global partnerships for indigenous chip design, and foundation models competitive on international benchmarks. The discussion emphasized that true digital sovereignty is about strategic, responsible autonomy—having control over technology choices, data, and operational processes—while leveraging global collaboration where necessary. IBM and other industry leaders stressed that sovereignty is not isolation, but about building resilient, open governance frameworks that ensure data, tech infrastructure, and mission-critical AI systems align with national priorities and societal values. The ultimate goal is to use sovereign AI to solve real-world challenges at scale, particularly in healthcare, education, and agriculture, driving inclusive growth and creating blueprints for global adoption.
- India aims to shift from AI consumer to AI architect, focusing on indigenous development across the AI stack.
- Three non-negotiable pillars for digital sovereignty: indigenous models tailored for Indian context, sovereign infrastructure (including domestic cloud and GPU clusters), and scalable, resilient architectures.
- India's strength in renewable energy and low-cost infrastructure is attracting global investment in domestic data centers.
- Indian designers constitute about 20% of global chip designers, with ongoing initiatives to build sovereign capabilities in chip design and manufacturing.
- Six to seven new foundational Indian AI models are in the pipeline, with existing ones performing well against global benchmarks.
- Sovereign AI is framed as strategic control, not necessarily complete self-sufficiency—global partnerships can coexist with autonomy in design and deployment.
- Sovereign AI should address India-specific challenges such as low-bandwidth environments, energy efficiency, and local language inclusion.
- Bhashini, the national language translation mission, enables digital inclusion across 36 Indian languages.
- IBM projects that by 2030, most global enterprises (outside the US) will have formal digital sovereign strategies.
- Three dimensions defined for enterprise digital sovereignty: data sovereignty (governance and compliance), technology sovereignty (open, auditable stacks), and operational sovereignty (control over critical processes).
- Sovereignty is about choice and control, not isolation, achieved through open standards, hybrid cloud, and secure data governance.
The Engines of Intelligence – Scaling AI Infrastructure & Security
The session focused on the critical infrastructure and operational challenges faced by organizations in scaling AI deployments, emphasizing the need for security to be integrated from the outset rather than as an afterthought. Panelists highlighted that networking, not just compute, is an escalating bottleneck as AI scales, with significant training time spent on network traffic. Ethernet’s resurgence as the industry standard for AI data centers was stressed, replacing older proprietary technologies. Specific challenges in India and South Asia were discussed, including slow technology adoption, skill gaps among network engineers, and serious power and cooling constraints in data centers. Policy recommendations included the need for holistic, coordinated strategies for infrastructure and energy policy at the state and national levels. Real-world security incidents involving state-sponsored attacks on critical infrastructure such as airports and financial institutions underscored the urgency for advanced, AI-powered security solutions capable of rapid response and anomaly detection. These experiences demonstrated the high stakes of AI infrastructure in national security, requiring robust, context-aware defenses and swift operational practices.
- Ownership and responsibility for AI risk remain unclear in most organizations, often causing security to be neglected during initial project phases.
- Security must be incorporated from the earliest stages of AI system development; retrofitting later is ineffective and risky.
- A live audience poll indicated networking – not compute or data pipelines – is viewed as the chief bottleneck in large-scale AI deployments.
- 20–50% of training time in large AI clusters is devoted to inter-node network traffic.
- AI data centers are trending toward using standard Ethernet networks due to advancements in silicon and protocols such as RDMA and RoCEv2, enabling better performance and future-proofing.
- The number and quality of network cables are critical—large clusters may require 30,000 cables, and cable failures can cripple performance.
- India and South Asia lag global AI deployment speeds due to slower technology adoption, skills shortages among engineers, and infrastructure constraints like insufficient power and cooling.
- Existing Indian data centers struggle to support the high rack-level power needs of AI clusters, limiting where workloads can be deployed.
- Policy fragmentation at the state level hampers strategic data center and power planning; a national, integrated infrastructure roadmap is recommended.
- Recent real-world cyber-attacks targeting Indian critical infrastructure (airports, finance) were handled by AI-powered rapid-response security solutions able to defend within minutes to hours, highlighting the new normal of AI-driven cyberwarfare.
- Modern threats require behavioral and context-aware anomaly detection beyond simple pattern matching, as attacks are increasingly fabricated with AI.
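The behavioral detection idea in the last bullet can be illustrated with a minimal sketch: instead of matching known attack signatures, learn a per-entity baseline of normal behavior and flag large deviations. A production system would use far richer features and models; this toy version applies a z-score to request rates on synthetic data.

```python
# Minimal behavioral anomaly detection: flag values that deviate from a
# learned baseline, rather than matching fixed attack signatures.
import statistics

def is_anomalous(history, current, z_threshold=3.0):
    """Flag `current` if it deviates from the historical baseline by more
    than `z_threshold` standard deviations."""
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history) or 1.0  # guard against a flat history
    return abs(current - mean) / stdev > z_threshold

normal_rates = [100, 105, 98, 102, 99, 101, 103, 97]  # requests/min baseline
print(is_anomalous(normal_rates, 104))   # typical load -> not flagged
print(is_anomalous(normal_rates, 900))   # sudden spike -> flagged
```

The point of the baseline approach is exactly what the bullet notes: AI-fabricated attacks may not match any known pattern, but they still have to behave abnormally to do damage.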
AI, Innovation, and Collaboration for Resilient Economies
The session at the India AI Impact Summit 2026 focused on the critical intersection of AI infrastructure, energy systems, and policy coordination necessary for resilient national economies in the face of rapid AI-driven data center expansion. Featuring a panel with leading figures from government, utilities, grid management, technology, and hyperscale infrastructure, discussions outlined the imperative for meticulous, preemptive planning and technical innovation to manage the immense, variable loads from AI data centers. US policy directives were highlighted, emphasizing international cooperation and streamlined regulation, while Indian experts detailed the technical and regulatory upgrades required for stable, efficient integration of hyperscale data centers into national grids. Industry leaders from Intel and AWS presented hardware and architectural innovations aiming at energy optimization and performance scaling. The consensus: aligning technological advances, updated grid planning, distributed generation, stringent regulatory requirements, and robust public-private coordination is essential to ensure that AI infrastructure acts as a multiplier for economic resilience, not a threat to grid reliability or consumer costs.
- The US has adopted a three-pillar AI Action Plan: innovation leadership, accelerated AI infrastructure (with an executive order to ease federal data center permitting), and international AI diplomacy (launching the American AI exports program).
- Indian utilities are experiencing massive hyperscale data center growth near urban centers, with individual facilities demanding up to 1 GW of power, necessitating high-voltage (220kV+) infrastructure and comprehensive, pre-planned siting.
- Grid India forecasts that by 2030, data center demand could match the total electricity needs of entire states (8-10 GW), presenting new grid stability and balancing risks due to their large, volatile, and inverter-based loads.
- Indian regulators are urged to mandate resource adequacy, reserve obligations, and local storage/captive generation for data centers, as well as updated grid codes to cope with these new, fast-ramping loads.
- Technological solutions from Intel cited include major advances at the silicon and system levels (e.g., RibbonFET, PowerVia, FO packaging), claimed to boost performance per watt and reduce data center power consumption by 15% at the silicon level alone.
- Panelists stressed the need for comprehensive modeling, national transmission planning, and coordinated regulatory upgrades to prevent unplanned costs or grid destabilization for end consumers.
- The summit underscores the necessity of aligning technological innovation, infrastructure investment, and policy reform across international, national, and industry stakeholders.
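The performance-per-watt framing in the Intel bullet is worth making explicit: a power reduction at constant performance translates into a slightly larger perf/watt gain. The arithmetic below uses only the quoted 15% figure; the normalization is illustrative, not a vendor-published benchmark.

```python
# A 15% power reduction at the same performance raises perf/watt by ~18%,
# since perf/watt scales with 1/power.
perf = 1.0
power_before, power_after = 1.0, 0.85   # 15% silicon-level power reduction
ppw_gain = (perf / power_after) / (perf / power_before) - 1
print(f"perf/watt gain: {ppw_gain:.1%}")
```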
AI Competitiveness: Turning Insight into Action
The panel at the India AI Impact Summit 2026 discussed the pivotal drivers for national competitiveness in artificial intelligence, emphasizing the need for robust infrastructure, open and interoperable technology ecosystems, and a strong, well-connected talent pipeline. Leaders from AMD, Anthropic, and policy think tanks highlighted India's rapid progress in sovereign AI deployments—where the government funds and shapes AI infrastructure, models, and data stacks to serve national interests. Emphasis was placed on scaling up responsibly with governance structures, regulatory guardrails, and fostering international cooperation. India was recognized as one of just four countries globally investing significantly across all AI stack layers, notably emerging as the second largest international user of Anthropic’s Claude model for advanced engineering and coding tasks. The panel called on governments to accelerate digital skills training, mainstream everyday AI adoption, and prioritize governance frameworks to ensure safe and inclusive AI innovation. The evolving AI landscape is thus becoming increasingly multipolar and collaborative, with India positioned at the forefront.
- A new AI innovation playbook is in development, aimed at practical frameworks for scalable national AI progress.
- India's data center and compute infrastructure investments are expanding rapidly, with several gigawatts of promised new capacity.
- Open, interoperable enterprise AI ecosystems were advocated as critical for scaling and tapping into India's large talent base.
- Anthropic is ramping up international operations, identifying India as the second biggest market for its Claude model outside the US, especially in complex engineering and coding applications.
- By January 2026, 50+ countries had launched 140 government-driven 'sovereign AI' projects; only India, South Korea, Sweden, and Taiwan invest comprehensively across data, models, and compute infrastructure.
- India is moving from the 'catch-up' to the 'leadership' phase in the global AI race, with active government investments spanning the whole technical stack.
- Multipolarity is replacing the China–US duopoly, with distributed AI capabilities and ongoing international partnerships.
- Governments must facilitate enabling environments with regulatory clarity, digital skilling programs, and institutional support for responsible and widespread AI deployment.
- Agentic AI and governance (not just regulation) are becoming central themes for 2026, highlighting the need to balance innovation speed with oversight.
AI for Social Good: Aligning Research with National Priorities | India AI Impact Summit 2026
The opening session of the India AI Impact Summit 2026 emphasized the transformative potential of artificial intelligence across key domains such as healthcare, education, governance, agriculture, and labor markets, especially in the context of real-world impact in India and the global south. Distinguished speakers, including leaders from JPAL and Nobel laureate Michael Kremer, underlined the importance of rigorous, evidence-based evaluation of AI initiatives to distinguish true impact from technological hype, drawing parallels to previous technology fads such as the 'one laptop per child' program. The Summit showcased research presentations and panels examining how AI-enabled programs—like personalized crop disease alerts for farmers and AI-driven court efficiency—influence tangible outcomes for different communities. Speakers also stressed the need for broad coalitions involving policymakers, researchers, implementers, and funders to generate scalable solutions that are both innovative and socially beneficial. Examples like India's AI-aided monsoon forecasting for millions of farmers served as evidence of AI’s emerging real-world benefits. Ultimately, the session set the agenda for a day focused on practical, rigorous assessment of AI impact, ensuring advances improve lives equitably and measurably.
- India AI Impact Summit 2026 is the first such summit held in the global south and focuses on real-world evidence of AI's societal impact.
- JPAL's AI Evidence initiative and the broader summit agenda are centered around empirically evaluating AI's effect, moving beyond hype.
- Six key impact areas for AI discussed: improved targeting and needs prediction, personalized support, frontline service augmentation, bias reduction, organizational efficiency, and progressive taxation.
- Concrete case studies highlighted included AI-powered flood prediction in Bihar, personalized farm assistance via mobile, and AI enhancements in court systems (e.g., Adalat AI).
- Michael Kremer cited AI-driven weather forecasting that provided 38 million Indian farmers with accurate monsoon predictions, validated by randomized control trials.
- Panelists cautioned against overreliance on untested solutions, referencing failed 'one laptop per child' efforts as lessons learned.
- The event draws participation from government leaders, big tech firms (Anthropic, Google, Meta), donor agencies (Gates Foundation), researchers, and implementing partners.
- A rigorous approach—testing for who benefits and under what conditions—is stressed to ensure AI initiatives reach their intended goals and populations.
AI in Healthcare: Innovation, Ethics, and Regulation
The panel at the India AI Impact Summit 2026 featured leading medical experts discussing transformative applications of AI across cardiology, oncology, and reproductive medicine. Dr. Tanuji showcased how AI has revolutionized cardiology, shifting the field from reactive to preventive care by enabling rapid and accurate detection of cardiac events, enhancing imaging modalities, and supporting complex interventions through simulation and virtual reality. Dr. Swarupa articulated AI's holistic role in oncology, from early detection and accelerated, personalized diagnostics to optimized treatment planning and management of patient workflows. Dr. Kiran described substantial improvements in reproductive medicine, citing AI-driven precision throughout fertility treatment: from personalized ovulation induction and objective, high-accuracy egg and embryo assessment to advanced embryo selection using time-lapse imaging. Across domains, panelists highlighted AI's potential to democratize healthcare by bridging gaps between rural and urban health infrastructure, reducing subjectivity, and empowering clinicians with faster, more accurate, and personalized care. Panelists also emphasized ongoing challenges, particularly around clinical trials and validation for AI-based tools within healthcare systems.
- AI has enabled the shift from reactive to preventive cardiology by forecasting myocardial events and predicting patient outcomes.
- Optical Coherence Tomography (OCT) image analysis, powered by AI, allows precise device sizing and tailored cardiac intervention strategies.
- AI in oncology drives early cancer detection, personalized treatment regimens, improved diagnostic speed, and outcome prediction.
- AI-supported radiology and radiation oncology enable targeted treatments with minimized impact on healthy tissue.
- Reproductive medicine benefits from AI through personalized ovulation induction, objective assessment of egg and embryo maturity (with reported 97% efficacy), and embryo selection using time-lapse imaging and analysis.
- Remote patient management and democratization of healthcare are facilitated through AI-enabled telehealth platforms, enhancing access for rural populations.
- AI-based 3D simulation and virtual reality support advance planning for complex cardiac procedures, improving clinician training and patient safety.
- Clinical trials for AI tools remain a major challenge, as robust, evidence-based validation is needed before widespread adoption in medical practice.
Building Trusted and Rights-Respecting AI Infrastructure
The panel at the India AI Impact Summit 2026 focused on the foundational requirements for building a secure and trusted AI ecosystem in India. Key industry leaders and experts from fields spanning infrastructure, law, global governance, and enterprise sectors discussed the core principles, bottlenecks, and best practices necessary to enable safe, scalable, and inclusive AI transformation. Panelists highlighted the need for resilient-by-design AI infrastructure, emphasizing uninterrupted power and connectivity, comprehensive regulatory clarity (especially regarding tax regimes and privacy laws), and the importance of market-led liability solutions rather than over-regulation. Global multistakeholder platforms like the IGF were recognized for fostering inclusive governance and capacity building, attuned to local realities. Enterprises face significant structural and operational challenges, notably in aligning business, technology, legal, and risk teams around governance, ROI, and compliance, as well as talent shortages. Leaders agreed that embedding resilience from the outset requires a structured approach to ROI, alignment of cross-functional goals, investment in both technology and human capital, and the creation of robust operational guardrails. The session concluded with the soft launch of a report consolidating these insights.
- Resilient-by-design AI infrastructure in India must prioritize uninterrupted power (including renewables), modern data center capabilities, and operational efficiency via AI-driven operations.
- India lags behind in investments toward trusted AI infrastructure, despite global commitments exceeding $600 billion from leading tech firms.
- Fragmentation in resource availability (electricity, water) across states and lack of a unified tax regime are major policy obstacles holding back AI infrastructure growth.
- Market-driven risk classification and liability allocation are preferred over additional regulatory interventions for fostering enterprise trust in AI systems.
- Enterprises face structural bottlenecks, including misaligned KPIs among business, technology, legal, and finance teams, leading to ROI and governance disconnects.
- Talent shortage and insufficient cross-functional understanding hinder effective AI governance implementation within organizations.
- Global platforms like the IGF facilitate inclusive governance by enabling developing countries (like India) to help shape AI norms based on context-specific needs rather than importing foreign models.
- A structured approach to AI projects—focusing on business-driven ideation, upfront ROI calculations, total cost of ownership (TCO) modeling, and minimum lovable product (MLP) deployments—can enhance resilience and reliability.
- Legal interpretations around India's new privacy law (DPDP Act) and best practice guidelines are pivotal for enterprises navigating compliance in AI model development.
- A report consolidating panel insights and best practices was soft-launched at the session.
Mobilising Finance for High-Impact AI Solutions
The session at the India AI Impact Summit 2026 brought together leading experts from multilateral development banks (MDBs), the Gates Foundation, and AI use case leaders to discuss strategies for scaling AI solutions for social good and economic growth, particularly in emerging economies. Panelists highlighted the necessity of mobilizing both capital and operational funding to bring validated AI use cases in key sectors like healthcare, education, and agriculture to scale. The World Bank underscored the transition of AI from a 'nice-to-have' to a 'mission-critical' tool in development and warned of increasing digital divides, as evidenced by low AI adoption rates in low-income countries. Both the World Bank and Asian Development Bank (ADB) emphasized the need for innovative, blended finance models that combine concessional funds, grants, guarantees, and private capital, as traditional infrastructure financing approaches are insufficient for the rapidly evolving AI landscape. Collaborative frameworks like the Full Mutual Reliance Framework between MDBs were highlighted as effective for lowering transaction costs and accelerating digital investments, exemplified by projects in Nepal. Discussion also focused on the importance of supporting 'Small AI'—targeted, scalable AI interventions for low-tech and rural environments—and stressed the need for comprehensive cost-benefit analyses, robust regulatory reform, regional cooperation, and human capital development to ensure sustainable, impactful AI investments.
- AI is now considered mission-critical in development, surpassing its previous status as an incremental innovation.
- Low-income countries currently account for only 1% of ChatGPT subscriptions; South Asia's GenAI traffic is just 20% of high-income countries'.
- MDBs recognize traditional project finance models are inadequate for AI; emphasis now is on blended finance (including grants, guarantees, philanthropic and private capital).
- World Bank and ADB pioneered the 'Full Mutual Reliance Framework,' enabling shared due diligence and faster project delivery (first digital investment in Nepal).
- Priority is on scaling up 'Small AI'—targeted, offline-capable AI solutions for widespread, low-resource settings.
- Investment in AI necessitates foundational digital infrastructure (data centers, power grids, connectivity), human capital, and regional integration.
- ADB notes a surge in AI investment demand, often for data centers, but warns of supply-driven approaches and stresses need for proper cost-benefit analysis.
- Regional and inter-MDB cooperation is critical, given the size and complexity of AI investments and impacts.
- Emphasis on regulatory reforms (e.g., Philippines' common tower framework) to unlock private sector participation.
- Developing countries' AI strategies must consider both immediate applications and long-term, diffuse benefits, requiring adaptable and sustainable funding models.
📅 Sessions from 2026-02-18
AI for Development Conversations: India AI Impact Summit 2026
This session at the India AI Impact Summit 2026 focused on the intersection of open-source AI, community empowerment, and sustainability in the context of India and the Global South. Speakers discussed the unique challenges faced by community organizations in adopting and influencing technology, emphasizing that the lack of involvement in design processes and limited resources often lead to dependency on proprietary solutions. The panel highlighted the promise of open-source AI for transparency, accountability, and the creation of public goods, but also flagged persistent issues: true openness is frequently lacking, indigenous contributions are constrained by unpaid labor, and current models risk being unsustainable or exploitative. Concrete examples, such as the Gates Foundation's Sunmati project and the 'idli stack,' demonstrated emerging approaches to building inclusive, community-centric AI datasets and solutions. The discussion underscored the urgent need for new business models to fairly reward contributors, avoid reinforcing dependencies on tech giants, and ensure freedom both in cost and agency (the 'libre' aspect). Philanthropic capital and innovative models of decentralized collaboration were proposed as avenues for sustaining and scaling truly open, community-driven AI ecosystems.
- Community organizations often lack involvement in technology design, leading to poor adoption and reliance on unaffordable or incompatible vendor solutions.
- Open-source AI is championed as a tool for accountability, transparency, and public collaboration—helping civil society hold governments and corporations to account.
- The practical meaning of 'open source' in AI is questioned: openness often does not extend to data, training methods, or full codebase access.
- Global South contributors face greater challenges due to unpaid labor and lack of resources, necessitating supportive patrons or backers.
- Sustainable open-source AI requires new business models that fairly compensate contributors; pure voluntarism or dependence on big tech company support is insufficient.
- The Gates Foundation's Sunmati project paid 20,000 women from diverse language backgrounds to annotate culturally relevant data, creating a dataset as a public good.
- A decentralized model like the 'idli stack' allows communities to address local challenges through collaborative, low-cost open-source solutions, with philanthropic capital as a key enabler.
- There is a call for market mechanisms—such as transparent pricing for data annotation labor—to emerge and make open-source contributions more equitable and sustainable.
- The risk of 'open-source washing' is highlighted: some projects labeled 'open source' do not meet the standards of freedom, access, or community governance.
- The session recognized a broader imperative: developing a 'libre' (freedom-based) ecosystem, balancing cost-free use with participant agency and autonomy.
The National AI Stack: From Compute to Commercial Impact | AI Impact Summit 2026
At the India AI Impact Summit 2026, the session focused on the critical infrastructure and evolving paradigms required for India and other nations to achieve 'sovereign AI nation' status. Nvidia's leadership articulated a strategic framework for developing AI factories—the full-stack infrastructure integrating energy, chips, data centers, models, and applications. Emphasizing a move from traditional chip-centric approaches to full-system co-design, the session highlighted new scaling laws in AI (pre-training, post-training, and test-time/reasoning), the massive compute and networking requirements of next-gen reasoning models (such as mixture-of-experts architectures), and the rapidly growing demand for tokens and inference accuracy in generative AI. AI evolution was traced from perception AI and deep learning to generative, agentic, and now physical AI, with the assertion that, moving forward, widespread autonomous systems will demand national-scale sovereignty and localized innovation. Cost efficiency (token economics) is being realized through full-stack innovation, positioning India to scale AI deployments in industries like finance, healthcare, manufacturing, and agriculture—thereby transforming daily life and accelerating India's self-reliance and global AI leadership.
- The summit champions the idea of nations building their own AI infrastructure to ensure AI sovereignty.
- Nvidia presented a 'five-layer cake' blueprint for AI factories: energy, chips, infrastructure, models, and applications.
- Three fundamental scaling laws—pre-training, post-training (fine-tuning), and test-time reasoning—are driving compute demand.
- Mixture-of-experts models like Kimi K2 utilize selective expert activation to increase efficiency and reduce costs, but require high-performance, all-to-all network communication.
- AI is transitioning through eras: from perception AI to generative AI to agentic (reasoning-enabled) AI, and now toward physical AI (autonomous systems).
- Model parameters are growing at approximately 10x per year (2021-2025), and annual token requirements for inference are increasing 5x.
- Token economics are improving, with per-token inference costs dropping due to full-stack innovation, not just faster chips.
- India is leveraging AI sovereignty and localized AI to drive sectoral transformation in finance, healthcare, manufacturing, and agriculture.
- AI factories now incorporate hundreds of thousands to millions of GPUs, signaling a fundamental redefinition of data center and AI infrastructure design.
- Extreme co-design across all infrastructure layers (compute, networking, storage) is highlighted as crucial for future AI scalability and cost control.
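The selective expert activation described above can be illustrated with a toy top-k gating sketch. This is a minimal NumPy illustration of the general mixture-of-experts routing idea under assumed toy dimensions, not Kimi K2's actual architecture; all sizes, weights, and the function name `moe_forward` are made up for the example:

```python
import numpy as np

def moe_forward(x, experts, gate_w, top_k=2):
    """Route a token to its top-k experts and mix their outputs.

    x        : (d,) token representation
    experts  : list of (d, d) weight matrices, one per (toy) expert
    gate_w   : (n_experts, d) router weights
    top_k    : number of experts activated per token
    """
    logits = gate_w @ x                   # router score for each expert
    top = np.argsort(logits)[-top_k:]     # indices of the k best-scoring experts
    weights = np.exp(logits[top])
    weights /= weights.sum()              # softmax over the selected experts only
    # Only top_k experts run; the rest stay idle — that is the efficiency win,
    # at the cost of routing tokens between experts (all-to-all communication
    # when experts are sharded across devices).
    return sum(w * (experts[i] @ x) for w, i in zip(weights, top))

rng = np.random.default_rng(0)
d, n_experts = 8, 16
experts = [rng.standard_normal((d, d)) for _ in range(n_experts)]
gate_w = rng.standard_normal((n_experts, d))
y = moe_forward(rng.standard_normal(d), experts, gate_w, top_k=2)
print(y.shape)  # output size unchanged, but only 2 of 16 experts computed
```

The sketch makes the trade-off concrete: compute scales with `top_k`, not `n_experts`, which is why these models need fast all-to-all networking once experts live on different accelerators.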
LIVE: The Future of India | AI for All, AI by Her & Yuva AI Awards | Grand Cultural Night
The India AI Impact Summit 2026 Award Ceremony and Cultural Evening brought together top policymakers, industry leaders, innovators, and youth to celebrate India's growing leadership in artificial intelligence (AI) with a strong focus on inclusivity and global impact. The event highlighted India's position on the world stage through bold ideas, diverse participation, and transformative solutions to real-world problems. Notable dignitaries, including Union and State Ministers, secretaries from MeitY, and leading industry figures such as Roshni Nadar Malhotra, emphasized AI’s transformative potential for all segments of society. The award ceremony recognized 28 winners across three major global challenges: AI for All, AI by Her (empowering women), and Yuva AI (youth under 21), drawn from 15,500 registrants across 136 countries with a strong showing from the Global South. Special mention was made of multi-national collaborations and mentorship, and corporate initiatives encouraging early and diverse talent into STEM and AI careers. The session also underlined the necessity of addressing representation and innovation gaps by engaging more women and youth, with real-world examples like HCL’s tech talent pipeline and global AI training programs breaking attendance records. The evening showcased both India’s accomplishments and its forward-thinking, inclusive strategies to democratize AI and lead the journey toward a developed India by 2047.
- India AI Impact Summit 2026 convened policymakers, industry leaders, academia, and global innovators to highlight India's AI leadership.
- 28 awards distributed across three key global challenge categories: 10 for AI for All (global challenge with entries from multiple Global South nations), 10 for AI by Her (women-led innovation), and 8 for Yuva AI (youth under 21).
- The Summit drew 15,500 global registrations spanning 136 nations, with 4,600+ applications and participation from 70+ Global South countries.
- Emphasis on mentorship with prototypes refined by global experts before final awards.
- Leadership comments stressed India's progress surpassing global averages in women’s participation in STEM and AI, highlighting sectoral opportunities and ongoing challenges in mid and senior management.
- Roshni Nadar Malhotra detailed HCL's program directly hiring grade 12 students with non-engineering backgrounds: a 10,000-strong cohort with 50% women, and a goal for 35% of new hires to come through this route.
- A world-record-setting online training in generative AI and Python was delivered in collaboration with the UP government and South Asian Women in Tech.
- Youth-led innovation spotlighted through the work of Varun Sikka (Stanford researcher, founder of Carbon Inc.), encouraging rapid prototyping and risk-taking in AI solutions.
- Policy remarks reiterated Prime Minister Modi’s vision of AI as central to 'Aspirational India' and a driver toward the 'Viksit Bharat 2047' goal.
- Inclusivity and representation foregrounded as fundamental for resilience and quality in the AI and tech ecosystem, with strategies to improve early entry and leadership retention for women and underrepresented groups.
Collaborating to Scale AI Adoption in the Global South
The panel at the India AI Impact Summit 2026 focused on the necessity and opportunities of multistakeholder collaboration to drive responsible AI adoption, particularly for the Global South. Moderated by Mr. Rahman Pandi of the Center for Responsible AI, the session brought together distinguished leaders from academia, industry, NGOs, and government advisory bodies spanning the UK, Africa, and India. The discussion highlighted ongoing efforts—such as the UN's AI advisory board report advocating for global AI governance, Google's collaborative AI work for Africa in weather forecasting and language inclusivity, and various initiatives aimed at leveraging partnerships between stakeholders. Emphasis was placed on the importance of diverse perspectives to ensure innovation serves local needs, inclusion, and measurable impact, with calls for integrating marginalized voices, sharing resources, and shaping global AI policy frameworks that benefit all, not just affluent countries.
- UN's AI advisory board (including Professor Dame Wendy Hall), comprising 40 multidisciplinary members with half from the Global South and China, released the 'Governing AI for Humanity' report (September 2024) focusing on inclusive global AI governance.
- The UN General Assembly implemented the recommendation to establish a global scientific board for AI policy guidance; further dialogues with policymakers are planned, to be launched at the upcoming AI for Good Conference in July.
- Google Research Africa, led by Dr. Aisha Wal Bryant, drives innovations in AI with solutions designed in Africa for global impact, spanning climate, health, sustainability, and education.
- Key Google initiatives include weather nowcasting for Africa using satellite data to overcome lack of radar infrastructure and open-sourced speech/language datasets collected via university and community partnerships to advance African language inclusion.
- Panel participants represented a diverse cross-section—academia, industry, NGOs, and government advisors—spanning multiple continents, underlining the session's focus on collaboration across geographies, sectors, and disciplines.
- Panel underscored challenges in data scarcity (e.g., only 37 weather radar stations in Africa), need for localized solutions, and ownership of data by affected communities.
- Strategy for responsible AI emphasizes impact measurement, scalability, and mutual benefit, especially for marginalized communities.
- Session reinforced the criticality of involving the Global South and China in major AI governance conversations to ensure technology benefits humanity equitably.
AI for ALL Challenge & Panel on Leveraging AI for Development in the Global South
The session featured two groundbreaking AI-powered solutions addressing critical challenges in agriculture and climate resilience in India. The first, Biomakers' Big Crop Technology, leverages soil microbiome DNA sequencing, environmental data, and AI to provide actionable insights for soil health, enabling more sustainable farming practices and achieving a 20% reduction in agrochemical fertilizer use. The company, originally from California and now operating globally, including in India, collaborates with local partners to adapt its technology for India’s vast agricultural sector. The second solution, Resilience 360 by Resilience AI, offers a single-window AI system for hyperlocal climate risk management, analyzing six types of natural disasters and providing risk, compliance, and resilience insights at the building level with 96% confidence in under 30 minutes. Having deployed its solution in 84 villages and 8 cities across India, and with recognition from the National Disaster Management Authority and partnership with industry players, Resilience 360 is advancing both disaster mitigation and business continuity. Both teams highlighted the importance of integrating user feedback (farmers and urban planners, respectively) to enhance their platforms, demonstrating the rapid evolution and localization of AI solutions for India's unique environmental and socio-economic landscape.
- Biomakers' Big Crop Technology uses soil microbiome DNA, environmental data, and AI to generate functional and ecological predictions for soil health management.
- Solution validated by 10 years of global data, scientific publications, and multiple patents (some already granted).
- Biomakers has achieved a documented 20% reduction in agrochemical fertilizer use with its platform.
- Global expansion strategy includes a new partnership in India with laboratory Agrael, aiming to scale soil health intelligence nationwide.
- Business models include price-per-test and price-per-acre, catering to different agricultural stakeholders.
- Continuous product improvement is driven by direct feedback from farmers, optimizing user experience and the relevance of data presented.
- Resilience 360, an AI-powered single window system, provides localized disaster risk assessment (six types), resilience benchmarking, and compliance tools for users at any administrative or business function.
- Platform delivers hyperlocal (building-level) risk insights in under 30 minutes with 96% confidence.
- Deployed with the United Nations in 84 villages and 8 cities in India, with ongoing pilots in major verticals such as data centers and hospitality.
- Operational in three regions (India, North America, and the Philippines), which together face 50% of global natural disasters.
- Recognition and endorsement from India's Ministry of Home Affairs—National Disaster Management Authority.
- Proprietary models incorporate climatic, geological, and structural data (database of 8.9 million structures), leveraging both visual and scientific AI for urban and rural resilience.
- Early revenue traction ($200k secured in under two years), with over 10x growth projected and breakeven imminent.
- Urban and rural solution applicability demonstrated with references to direct disaster response scenarios.
From Average to Top 1%: How Creators Are Using AI to Leapfrog the Competition
The panel at the India AI Impact Summit 2026 convened top Indian digital content creators and educators to discuss the transformative role of AI in content creation, consumption, and global influence. Led by Viraj, the conversation featured Prakhar Gupta (host of a leading podcast), Naman Deshmukh (tech and AI content creator with 8 million followers), and Ishan Sharma (AI educator). The speakers highlighted India's standing as the world's largest consumer and producer of digital content, and the opportunities AI presents to democratize content creation, erase cosmetic barriers (like accent and location), and automate the process. Practical insights included the creation of AI-powered content avatars and automated channels, with examples of channels reaching 1 million followers in a few months through full automation. The discussion underscored content as a game of storytelling and soft power, arguing that India's vast, tech-savvy youth can project influence worldwide via AI-boosted media. While AI models keep rapidly iterating (e.g., OpenAI and Anthropic launches), the real challenge and opportunity lie in leveraging these tools to upskill, adapt, and build applications that serve India's narrative globally. Speakers emphasized storytelling, relatability, constant learning, and pattern recognition as key AI-era skills. The session concluded with a call to harness this 'levelling of the playing field' to cultivate India's long-term soft power and global standing.
- India produces and consumes more digital content daily than any other country.
- AI erases traditional barriers in content creation (accent, language, appearance, geography).
- Automated, AI-powered content channels (e.g., Instagram) can hit massive reach: one cited creator grew to 1 million followers in 4-5 months, most viewers unaware the channel is AI-run.
- Tools like custom GPTs, HeyGen, ElevenLabs, and extensive personal data training enable creators to automate scriptwriting, avatar creation, and content delivery.
- Less than 5% of content creators in the audience use AI avatars, indicating significant potential for mainstream adoption.
- Recent rapid advancements in AI models (OpenAI's GPT-5.3, Anthropic's Claude 4.6 Opus) intensify competition and capability.
- Panelists urge focus on using AI for upskilling, judgement, testing, and pattern recognition, not just model-building.
- Content is reframed as core to national soft power and global influence, with India uniquely advantaged due to its demographic and technological base.
- Storytelling, relatability, and strong points of view remain core; automation augments, but doesn’t replace creator authenticity.
- Panel frames AI as 'levelling the playing field' globally for Indian creators, presenting a multi-decade soft power opportunity.
From Policy to Harvest: AI-Driven Agricultural Transformation
The session 'From Policy to Harvest: Leveraging Generative AI for Data-driven Agriculture Transformation' at the India AI Impact Summit 2026 focused on how generative AI and data-driven technologies are revolutionizing Indian agriculture. Key speakers, including Shri Sanjay Kumar Agarwal, Joint Secretary of the Department of Agriculture, emphasized the ongoing transition from traditional farming to a technology-empowered ecosystem. The Indian government highlighted the launch of 'Bharat Vistar', an AI-driven module aimed at delivering advisories and services directly to farmers in multiple local languages, thus boosting inclusivity and access. Extensive datasets from schemes like PM Kisan are being leveraged to improve AI advisories and validation. The critical challenges tackled include market access, accurate forecasting, smarter supply chains, storage, finance, and mechanization. The World Food Programme (WFP) representative showcased global and India-specific AI solutions for reducing post-harvest losses, such as route optimization for distribution networks and smart warehouse robotics for real-time spoilage detection. Panelists underscored the vast, underutilized agricultural data pool and the necessity for responsible, scalable, and inclusive AI integration to advance food and nutritional security, efficiency, and farmer incomes.
- Launch of 'Bharat Vistar', an AI-driven module by the Government of India, focused on reaching every farmer with multilingual support.
- Extensive data from government schemes like PM Kisan (covering around 100 million farmers each month) is being integrated into digital agriculture architecture.
- AI-powered chatbots are now available to provide advisories, grievance redressal, and personalized information to farmers, overcoming language barriers.
- Integration of all markets and mandis with digital systems enables real-time price discovery and informed decision-making for farmers.
- Private sector partnerships are enhancing access to storage, credit, and logistics for farmers, reducing dependence on immediate distress sales.
- AI-based weather advisories and climate risk modeling are now more accurate and accessible, helping farmers manage uncertainty.
- WFP’s deployment of smart warehouse solutions, including autonomous robots for spoilage/pest alerts and sensors, aims to decrease India's large post-harvest losses.
- WFP introduced a global AI strategy, including real-time tools like Hunger Map Live for crisis forecasting and early warning.
- The panel highlighted the huge but currently underutilized agricultural data from 160 million hectares, 150 million farms, and thousands of markets.
- Young and educated farmers are embracing technology, encouraging a shift to diverse, export-oriented, and more lucrative agricultural practices.
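The route-optimization work WFP described for distribution networks can be illustrated with a toy heuristic. This is a minimal sketch, not WFP's actual system: the nearest-neighbor rule below simply visits the closest unvisited stop next, and all warehouse names and coordinates are hypothetical.

```python
import math

def nearest_neighbor_route(depot, stops):
    """Greedy nearest-neighbor tour: from the current location,
    always visit the closest unvisited stop next. A toy stand-in
    for the distribution-route optimization discussed in the
    session; real systems use far stronger solvers."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    route, current = [], depot
    remaining = dict(stops)  # name -> (x, y) coordinates
    while remaining:
        name = min(remaining, key=lambda n: dist(current, remaining[n]))
        route.append(name)
        current = remaining.pop(name)
    return route

# Hypothetical stops (not from the session).
stops = {"Mandi A": (2, 1), "Mandi B": (5, 4), "Cold store": (1, 3)}
print(nearest_neighbor_route((0, 0), stops))  # ['Mandi A', 'Cold store', 'Mandi B']
```

A production planner would replace the greedy rule with a vehicle-routing solver and road-network distances; the greedy version only shows the shape of the problem.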
Stacked for Scale: Semiconductors and Foundational AI
The session at the India AI Impact Summit 2026 highlighted India's growing leadership in the global AI hardware stack, with a focus on the India Semiconductor Mission (ISM) and the rise of domestic semiconductor startups. Shri Amitesh Singh, CEO of the India Semiconductor Mission, detailed the accomplishments of ISM 1.0, including support for startup-led design, and announced the transition to ISM 2.0, which will expand the ecosystem across design, manufacturing, packaging, R&D, equipment, materials, and workforce skilling. Concrete progress includes the approval of 24 design projects (with significant VC funding), 10 manufacturing and packaging projects (including fabs), and a vision to create world-class “fabless” semiconductor companies. The following panel discussion featured prominent Indian founders building cutting-edge silicon hardware, underscoring India’s ambitions to become a leader in GPUs, power chips, and SoCs. The panelists and officials acknowledged that semiconductor industry development is a long-term, marathon-like endeavor, but outlined concrete timelines: by 2028-2030, existing projects will be at capacity, with many more to come, and India aims to foster billion-dollar, and eventually multi-trillion dollar, chip companies. Dramatic advances in AI model development and a robust startup ecosystem lend optimism to India's emergence as a technology superpower.
- India is moving from being a technology consumer to a primary architect of the global AI hardware stack.
- ISM 1.0 supported 24 startup design projects; 14 have secured VC funding and are advancing towards manufacturing.
- 10 manufacturing/packaging projects (1 silicon, 1 silicon carbide fab, 8 packaging units) are approved and launching commercial production soon, targeting full capacity by 2028-2030.
- ISM 2.0 to expand focus to include: higher investment in design, continuous manufacturing and packaging incentives, new R&D verticals for advanced technologies, and critical supply chain elements (equipment, chemicals, materials).
- Workforce skilling will see a national push across all semiconductor verticals, updating curricula and training programs.
- Strong emphasis on creating homegrown 'fabless' billion-dollar semiconductor companies, with ambitions to grow to the scale of global giants like Nvidia and Broadcom.
- India's AI startup ecosystem is advancing rapidly; 12 new AI model launches this week and a track record of surpassing global benchmarks in Indic languages.
- Analogies presented: India's rise in digital payments (from outside the top 100 in 2015 to 60% of global real-time payments by 2025) and space-tech startups (from 2 in 2015 to over 300).
- By 2030, at least 10 major semiconductor production units are expected to be fully functional, with the potential for this number to rise to 15-18 with ISM 2.0.
- The government and ecosystem are aligned for a decades-long journey, viewing semiconductor leadership as a marathon requiring sustained, multi-generational effort.
Building AI Readiness Among Frontline Health Workers
This session at the India AI Impact Summit 2026 convened leaders from the Meghalaya state health system, Gates Foundation India, WHO, academia, and sector experts to address the role of AI and digital competency in healthcare transformation, particularly in underserved and hard-to-reach regions. Dr. Valerie opened with a call for interactive discussion, outlining the necessity of integrating digital tools—such as electronic health records, telemedicine, real-time surveillance, and AI-supported clinical decision-making—into health delivery. Principal Secretary Dr. Satkumar described Meghalaya’s urgent health challenges, notably a maternal mortality rate three times the national average, and detailed the state's strategic focus on building frontline worker digital and AI capacities. He emphasized shifting the identity of frontline staff toward problem-solving and local innovation, enabled by better data use and decentralized leadership. Shama Shridas from the Gates Foundation revealed a new, health-focused MOU with Meghalaya that, in response to AI's rapid rise, now centers on developing comprehensive competency frameworks for health workers. She underscored that unlocking the full value of AI tools requires mapping existing skills, identifying gaps, and supporting frontline workers—recognizing the diversity and specific needs within various cadres. The panel collectively stressed the importance of trust, safety, and readiness within communities and health systems to maximize AI’s transformative potential for universal, affordable care. The session’s conversation signals a pivotal move from deploying digital tools as mere add-ons to systematically embedding capacity-building and competency assessment for sustainable tech adoption in healthcare.
- Meghalaya faces a maternal mortality rate three times the national average, with 25% of its villages hard to reach.

- State strategy centers on 'building state capabilities,' emphasizing decentralized leadership and empowering frontline health workers through digital and AI skills.
- A new Memorandum of Understanding (MOU) has been signed between the Government of Meghalaya and the Gates Foundation focused on digital health, now pivoting toward AI readiness.
- Digital tools discussed include real-time surveillance, electronic health records, telemedicine, and decision-support apps tailored for local contexts (e.g., Asha App, Mother App).
- Current underutilization of digital health tools is attributed to a mismatch between design/features and the real competencies/capabilities of frontline health workers.
- Session introduces a systematic, competency-based framework for AI and digital tool adoption, involving the mapping, measuring, and building of worker competencies over time.
- A call for participatory, community-centric design of AI tools to ensure equitable, trustworthy, and safe integration into healthcare delivery.
- Recognition of the urgent need for upskilling health workers to adapt to rapid technological change and combat digital misinformation in health.
- Broad collaboration between state government, international organizations (WHO, Gates Foundation), and academic institutions to address these challenges.
Bridging the Global AI Divide: From Principles to Practice
The session, led by Pratima Harit of Lenovo, brought together CSR leaders from major technology companies to discuss strategies for advancing AI and future skills development in India, with strong emphasis on inclusion, gender diversity, and structural transformation. The panel explored collaborative models between industry and academia for integrating real-world AI skills into curriculum and pedagogy, stressing the unpredictability of 'future skills' and the critical need for adaptive learning environments. IBM outlined a bold vision to skill 5 million individuals in AI, quantum, and cybersecurity by 2030, leveraging curricular integration, hands-on learning, and infrastructure investment, while Capgemini highlighted the urgent necessity for upskilling faculty to keep pace with rapidly evolving technology demands, emphasizing collective industry effort, especially for underserved regions. Micron Foundation addressed persistent gender gaps in STEM, noting the importance of intersectional approaches and sustained support structures, with data revealing a stark disparity between women's educational attainment in STEM and their workforce participation. The discussion underscored practical, outcome-driven metrics for evaluating AI skilling programs, particularly around employability and career progression, using examples from established programs and active partnerships.
- Panel comprised CSR leaders from Lenovo, IBM, Capgemini, Micron, Readington, and Kindr, all technology firms.
- IBM announced a goal to upskill 5 million people in AI, quantum, and cybersecurity by 2030 through curriculum integration and hands-on opportunities.
- Industry-academia partnerships are pivoting towards project-based learning, curriculum-level AI integration, and dedicated AI labs.
- Capgemini flagged the obsolescence of traditional entry-level tech skills due to automation and generative AI, calling for urgent, collective, large-scale faculty upskilling focused on new in-demand areas like prompt engineering.
- Micron Foundation cited World Economic Forum data: 48% of STEM students in India are women, but only 18% enter STEM jobs, identifying critical gaps in confidence and network-building.
- Wii STEM initiative, in partnership with UN Women, is supporting ITI girls in India to access AI training combined with confidence and CV-building modules.
- Panel emphasized the need for intersectional approaches to inclusion, addressing challenges unique to women from low-income or marginalized backgrounds.
- Metrics for success in skilling are shifting from enrollment volumes to outcome-based measures, such as employability and income progression.
- Active encouragement for delegates to visit partner booths (CSR Box, Wii STEM) for hands-on exposure to ongoing initiatives.
Putting AI to Work: Solving the Productivity Challenge
The opening session of the India AI Impact Summit 2026 set a high-level context for the strategic importance of AI adoption as a general-purpose technology analogous to historical advances like steam engines, electricity, and computing. Kevin Allison, President of Minerva Technology Futures, emphasized that AI's rapid diffusion holds the key to unlocking economic productivity and global competitiveness. Referencing Professor Jeff Ding's research, speakers highlighted that success in past technological revolutions depended more on broad, economy-wide diffusion and upskilling than just inventing new technologies. Examples from India’s Future Skills initiative and Singapore’s digital CTO programs illustrate the proactive public policy approaches for scaling digital and AI capabilities, especially among small and medium enterprises. The discussion positioned India and other Global South/majority nations as prime candidates to leapfrog through attentive policies focusing on skilling, infrastructure, and ecosystem readiness. The session also outlined the government’s potential role in building the digital ‘roads and bridges’—cloud, data, and standards—that can support rapid, inclusive AI adoption, stressing that the right policy interventions today could dramatically accelerate productivity benefits and global leadership tomorrow.
- AI is positioned as the next general-purpose technology (GPT), comparable to historic innovations such as steam, electricity, and computing.
- Rapid diffusion, rather than mere innovation, determines which countries benefit the most from technological revolutions.
- Jeff Ding's recent book and supporting research highlighted the necessity of building widespread skills infrastructure to enable country-wide adoption.
- India's Future Skills initiative and Singapore’s digital CTO-for-hire program are cited as global models for enabling digital upskilling and AI adoption among SMEs.
- Governments are urged to invest in 'digital infrastructure'—skills, cloud computing, standardized datasets, and interoperability frameworks—to foster AI adoption.
- The session underscores that developing economies, particularly in the Global South, have an opportunity to innovate and lead in AI diffusion.
- The evolving human-computer interface, enabled by generative AI, demands continuous adaptation in digital skills training programs.
- AI policy discussions are being conducted under the Chatham House Rule, emphasizing open information sharing without attribution.
- No traditional Q&A—instead, participant-driven discussions to follow at an informal reflection event.
- Panelists from industry heavyweights like Amazon and influential startups like LocoBuzz are set to deepen the discussion on public-private collaboration in AI scaling.
Campus-to-Impact: Building India’s AI Innovation Pipeline
The opening session of the India AI Impact Summit 2026, hosted by Professor Mangala Jain of Titankar Mahavir University, emphasized India's pivotal role as an emerging AI innovator, not merely a consumer of global technology. The panel featured accomplished leaders from academia, industry, biotechnology, international policy, and innovation ecosystems, including Dr. Nikhil Agarwal (entrepreneurship and startup expert), Dr. Kavita Singh (healthcare and R&D leader), Maya Sherman (AI researcher and diplomat), and Utkarsh Mishra (ecosystem strategy leader). The discussion centered on bridging gaps between research, industry, and education to foster scalable, impactful AI solutions. Critical themes included: the dangers of academic and professional silos, the need for human creativity and critical thinking alongside technological adoption, focusing on real-world impact and necessity rather than pure technological novelty, commercial viability in product development, and a holistic, stakeholder-inclusive approach. The panel offered actionable advice for students and campuses: nurture interdisciplinary collaboration, prioritize user and societal needs, consider commercial scalability from the outset, and remember that the ultimate utility of AI lies in serving people rather than just advancing algorithms.
- India positioned as an emerging global leader in AI innovation, with a move from technology consumption to creation and problem-solving.
- The summit audience included students, startups, industry, and academia, reflecting a drive for a robust, inclusive AI talent pipeline.
- Stop working in silos: Universities and innovators must foster interdisciplinary collaboration to maximize AI's impact.
- Prioritize creativity and critical thinking over overreliance on automation or AI platforms.
- Student and campus innovators must think beyond prototypes—commercial scalability and market fit are essential from the start.
- The failure of campus startups is often due to lack of commercial deployment planning and poor cost control.
- In healthcare, and generally, technological adoption hinges on addressing true user needs and societal context, not just technical performance.
- Dr. Kavita Singh cited a failed DNA COVID vaccine (with over ₹200 crore investment) as an example of technology not adopted due to user skepticism.
- Panelists stressed actionable takeaways: at least one new idea, connection, or action for each participant.
- Panel comprised leading experts with international recognition, underlining the summit's high-profile nature and commitment to cross-sector expertise.
AI in Fintech: Solving for India at Scale
In this session at the India AI Impact Summit 2026, PhonePe's technology leadership offered a comprehensive overview of how the company is strategically integrating AI and large language models (LLMs) to scale financial inclusion and operational excellence across India. Highlighting their immense reach, with over 650 million registered users and 52–53 billion transactions valued at around $870 billion over six months, PhonePe shared the philosophy behind 'internal-first' AI adoption: investing in robust infrastructure and domain-specific intelligent systems before wider deployment. Emphasizing security, compliance, and modular integration over disruptive change, PhonePe described their systematic approach—building a proprietary AI/ML platform and agent framework (codenamed 'Godric'), enabling model deployment in a secure, compliant, and scalable architecture. Special attention was paid to adapting LLMs to sensitive data environments and optimizing for developer and operational productivity, including specialized layers for fraud detection and document validation. The session underscored their commitment to auditability, quota management, and transparent workflows, ensuring AI innovation supports both regulatory demands and nationwide, inclusive growth.
- PhonePe serves over 650 million registered users and 47 million merchants, with roughly 300 million monthly active app users and 52–53 billion transactions (worth ~$870B) over six months.
- Company adopts an 'internal-first' AI deployment strategy: iterate and mature AI technologies and workflows internally before broad customer-facing rollout.
- Investment in proprietary AI/ML infrastructure—specifically, their in-house AIML platform and 'Godric' LLM Gateway for secure, controlled, and scalable deployment across both own data centers and select cloud services.
- Robust compliance and data security measures—ensuring customer data never leaves national boundaries and internal authorization/audit systems track all LLM/API interactions.
- Domain-specific AI/ML models are prioritized for key processes, such as fraud detection, document validation, and operational efficiency, rather than relying solely on generic tools.
- All major AI deployments are carefully integrated with PhonePe’s homogeneous tech stack without disrupting established workflows, leveraging mature internal platforms for quota, audit, and orchestration.
- Scalability is engineered from the ground up: models are run locally on user devices and within data centers without Kubernetes, emphasizing simple, stateless container orchestration.
- Commitment to ongoing, organization-wide AI literacy and improvement, with engineering teams as early adopters and continuous feedback loops for efficiency tuning and safeguards.
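The gateway pattern described above (the 'Godric' LLM gateway with authorization, quotas, and auditable LLM/API interactions) can be sketched in a few lines. This is a generic illustration under assumed semantics, not PhonePe's actual design: the class name, team names, and the per-hour quota policy are all hypothetical.

```python
import time
from collections import defaultdict

class LLMGateway:
    """Minimal sketch of a gateway that fronts model calls with
    per-team quotas and an audit trail, in the spirit of the
    'Godric' gateway described in the session. Names and the
    quota policy are hypothetical; hourly quota reset is omitted
    for brevity."""

    def __init__(self, hourly_quota):
        self.hourly_quota = hourly_quota   # team -> allowed calls/hour
        self.usage = defaultdict(int)      # team -> calls so far
        self.audit_log = []                # every decision is recorded

    def call(self, team, prompt, model_fn):
        # Unknown teams get a quota of 0, i.e. denied by default.
        if self.usage[team] >= self.hourly_quota.get(team, 0):
            self.audit_log.append((time.time(), team, "DENIED_QUOTA"))
            raise PermissionError(f"quota exhausted for {team}")
        self.usage[team] += 1
        self.audit_log.append((time.time(), team, "ALLOWED"))
        return model_fn(prompt)  # the model stays behind the gateway

gw = LLMGateway({"fraud-team": 2})
echo = lambda p: f"response:{p}"
print(gw.call("fraud-team", "score txn 42", echo))  # response:score txn 42
```

The design choice worth noting is that every call, allowed or denied, lands in the audit log, which is what makes quota enforcement and compliance review possible after the fact.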
Foundation AI for Food, Energy, and Health: Powering Sectoral Transformation
The session at the India AI Impact Summit 2026 spotlighted the rapid strides India has made in AI, backed by robust investments, research, and international collaborations. India now boasts the world’s third most vibrant AI ecosystem and is home to the second largest AI project repository on GitHub. Substantial investments—over $11 billion in the past decade, with record inputs from Google and Tata—are fueling AI development. The government-backed India AI Mission has deployed more than ₹10,000 crore since its 2024 launch, targeting sectors like agriculture, healthcare, and smart cities. The panel brought together academic and research leaders from India and Canada, with a strong focus on Indo-Canadian collaborations for AI-driven innovation in food, energy, health, and critical minerals. Saskatchewan’s agricultural expertise and resource wealth, matched with India’s technology and research scale, offer complementary strengths. Key Indian institutions, including IIT Ropar and CSIR labs, showcased pioneering work in agri-tech AI and sustainable mineral extraction, leveraging AI/ML to accelerate progress. The summit emphasized how strategic partnerships, especially in research and educational initiatives, will shape the next phase of AI impact, fostering innovation, policy evolution, and significant socioeconomic benefits across both nations.
- India is ranked 3rd globally in the vibrancy of its AI ecosystem (Stanford Global AI Report) and hosts the 2nd largest AI project repository on GitHub.
- Over $11 billion in AI investment was secured in India between 2013 and 2024; separately, Google committed $15 billion and Tata $11 billion in the past year.
- Government of India has deployed 10,000+ crore INR (approx. $1.2 billion) through the India AI Mission since April 2024.
- Key focus areas under India’s AI Mission: agriculture, healthcare, and smart cities (with energy as a cross-cutting theme).
- IIT Ropar highlighted that 30% of India’s deep tech agri-tech startups originate from the institute.
- Significant progress in Indo-Canadian collaboration, notably with the University of Saskatchewan and premier research links in agriculture, critical minerals, and health sectors.
- Saskatchewan produces 25% of the world’s uranium and 30% of its potash, and supplies 85% of the chickpeas and pulses imported by Bangladesh.
- AI applications are accelerating sustainable practices like residue management for bioenergy, hydrogen production, and transitioning to net zero emissions by 2050.
- CSIR (Council of Scientific and Industrial Research) operates 38 labs and is a Center of Excellence for critical minerals, deploying AI/ML across materials exploration and extraction.
- CSIR also runs AI education initiatives, including outreach to school children and PhD programs, widening AI talent pipelines.
- Parallel collaborations underway with Korea and Israel, enhancing India’s AI diplomacy and strategic partnerships.
Democratising AI Compute and Data for Entrepreneurs
This session at the India AI Impact Summit 2026 tackled the complex challenge of democratizing AI in India, spanning from foundational access to data and compute infrastructure, to national policies, standards-setting, and real-world applications. Panelists emphasized that true democratization requires more than just open AI models—it depends on reliable data access, robust privacy frameworks, secure and equitable compute resources (including quantum and advanced chips), localization through language models, and operational excellence. Notably, India has dramatically improved compute availability (e.g., subsidized GPUs for startups) but must still overcome challenges in IP creation, value addition, and equitable hardware access. The discussion stressed India’s unique position, holding 40% of the world’s AI talent, and described the urgent need to nurture domestic innovation, foster standards participation at the global level, and leverage AI to address local use cases, from agriculture to inclusive commerce. Incubators across states, industry-academia partnerships, and the growth of unicorn startups highlight grassroots AI impact, while panelists warned of digital colonialism if India fails to secure technological sovereignty. The session underscored AI as a critical lever for economic transformation, jobs, and even electoral politics, calling for policies that secure India’s place at the AI table globally.
- India has procured approximately 38,000 GPUs and offers compute to startups at a subsidized rate of 280 rupees/hour as part of the India AI Mission.
- India possesses 40% of the world’s AI talent, but lags in IP creation and local use case development.
- Seven layers of the internet infrastructure were highlighted, with emphasis on root servers, submarine cables, and broader geopolitics affecting AI democratization.
- The government is pushing for both short- and long-term plans to build national compute capabilities, including quantum computing.
- Initiatives include establishing AI incubators in all 72 districts of Uttar Pradesh and fostering industry-academia partnerships (IIMs and IITs).
- Open-source large language models and localization efforts (BharatGPT, Bhashini) are active, with a push for language inclusion and small LLMs.
- Operationalizing and securely handling massive data streams was identified as an under-discussed but critical challenge.
- Panels called for greater Indian participation in global standards bodies (e.g., ISO, ITU) to influence AI governance.
- Success stories like Glance AI (18 content looks per selfie, 450 suppliers connected, used on 550 million devices) illustrate local AI innovation.
- Warning issued: lack of domestic hardware and infra control could lead to digital colonization, with policy seen as vital for safeguarding AI sovereignty.
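The subsidized rate of 280 rupees per GPU-hour cited above can be put in perspective with a quick back-of-envelope estimate. The job size below is a hypothetical example, not a figure from the session.

```python
# Back-of-envelope compute cost at the cited IndiaAI Mission subsidized rate.
# The fine-tuning job size is hypothetical, chosen only for illustration.
RATE_INR_PER_GPU_HOUR = 280

gpus, hours = 8, 72  # e.g., a small fine-tuning run over three days
cost_inr = gpus * hours * RATE_INR_PER_GPU_HOUR
print(f"{gpus} GPUs x {hours} h = Rs {cost_inr:,}")  # Rs 161,280
```

At this rate, a multi-day experiment lands in the low lakhs of rupees, which is the kind of price point that makes iteration feasible for early-stage startups.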
How AI Is Powering Development Across the Global South
The session at the India AI Impact Summit 2026 showcased an emerging Indo-Korean partnership to fuel science, technology, and innovation-driven entrepreneurship, especially in Punjab. Key organizations including IIT Ropar, Startup Punjab, Invest Punjab, and Korea’s UD Impact outlined collaborative plans to leverage AI, CSR funding, and education platforms for scaling up startup support and talent development. UD Impact highlighted its AI-powered entrepreneurship platform, history of incubating over 30,000 startups in Korea, and current expansion into India and Japan. Concurrently, Invest Punjab detailed proactive efforts to position Mohali as a deep-tech hub with world-class infrastructure, regulatory facilitation, and incentives, including specific policies to attract IT, semiconductor, ESDM, and aerospace investments, complemented by the creation of the Punjab Incubator Network. IIT Ropar illustrated the regional strength in deep-tech incubation and R&D, supporting more than 400 startups and aiming to integrate with Korean expertise, particularly in semiconductors and AI. Together, the session underlined a strong commitment to global collaboration, enabling the region to become a significant driver in AI, deep-tech innovation, and job creation.
- UD Impact (Korea) has enabled over 30,000 startup launches in Korea and is expanding its entrepreneurship education and AI-powered LMS to India and Japan.
- New collaborations will see Startup Punjab, Invest Punjab, IIT Ropar, and UD Impact join forces, focusing on science, technology, and innovation.
- UD Impact's AI-based LMS now supports 10,000 startups annually, providing AI-driven personalized coaching and scalable training.
- Punjab is pushing aggressive investment and startup policies, with Mohali highlighted as a future deep-tech and semiconductor hub (including a Tata Advanced System investment of 5,000 crore in SCCL).
- Brand Mohali and Brand Punjab campaigns are being launched to attract startups and tech firms, leveraging cluster institutions (IIT Ropar, ISB, DRDO labs, etc.).
- Punjab offers a unique, unified regulatory framework guaranteeing fast-track single-window clearances for businesses (e.g., startup approvals in 5 days in green industrial areas).
- IIT Ropar has incubated 400+ deep-tech R&D startups, ranks 3rd in India by number of startups, and leads PINE, the Punjab Incubator Network of 36+ incubators.
- Specific mention of Infosys establishing a major 6,000-capacity campus in Mohali and ongoing discussions with GCCs, drone companies, and R&D centers.
- Planned new fiscal incentives and second round of state-led grants for startups, aiming to encourage mature startups to relocate to Mohali.
- Korean and Indian collaboration to focus especially on deep-tech, AI, and semiconductor ecosystems through joint incubation and talent development initiatives.
AI Assurance in Healthcare, Manufacturing, Mobility, and Governance
The session at the India AI Impact Summit 2026 brought together leading academics who outlined the urgent need for robust, cross-sectoral frameworks to measure and evaluate AI systems in India. Speakers emphasized that with AI permeating domains such as governance, healthcare, and manufacturing, siloed methods of assessment are insufficient—there must be adaptable, domain-translatable evaluation metrics and methods. Key thrusts included the importance of open, explainable, and trustworthy AI in healthcare tailored to Indian demographics, the shift in manufacturing toward human-centric, auditable, and transparent AI solutions, and the development of multilingual, multicultural, and auditable government AI services. The panel proposed pilot programs for foundational metrology, unified interfaces across diverse languages, edge-case handling, and real-time operations monitoring—all while aiming to establish internationally-aligned standards and compliance frameworks unique to India's needs. The session concluded with a critique of benchmark-driven AI evaluation, proposing a broader, more context-sensitive approach as AI models grow more widespread and powerful.
- Cross-sectoral AI measurement frameworks are being prioritized to handle AI's integration across governance, healthcare, manufacturing, logistics, and more.
- India-Germany collaboration has initiated open healthcare AI benchmarking platforms that can be tailored to local and international demographics and medical practices.
- Major focus areas in healthcare include explainability, trust, personalized preventive care, and public health AI to mitigate doctor-patient ratio challenges.
- Manufacturing metrics must address transparency, bias, robustness, data drift, and real-time operational auditing for Industry 5.0 (human-centric AI).
- There is a proposed unified citizen interface for governance AI—auditable, multicultural, multilingual systems with explainable decision-making (including pilots in government projects like Aadhaar).
- Real-time governance and compliance require standardization, traceability, audits, and bridging data diversity gaps.
- The panel critiques benchmark-based AI evaluations; advocates for more holistic, functional, and context-driven approaches to AI impact assessment.
- Strong emphasis on trust, accuracy, and explainability as foundational pillars for deploying AI at large scale in India.
AI-Powered Assessment and the Future of Student Success
The session at the India AI Impact Summit 2026 featured an in-depth presentation on Letrus, an AI-driven literacy program in Brazil, and an analysis of its measured impact through rigorous evaluation. Letrus addresses Brazil’s significant literacy gap by providing AI-powered, personalized feedback on student writing and actionable insights for teachers and policymakers. A randomized controlled trial (RCT) conducted in partnership with J-PAL demonstrated notable improvements: high student and teacher engagement, significant increases in writing proficiency, and a tangible narrowing of achievement gaps between public and private school students. The evaluation's credibility enabled Letrus to scale rapidly as an official public policy tool across Brazil, resulting in substantial educational gains at state and national levels and recognition by UNESCO. The session underscored that Letrus’ success is rooted in context-aware implementation, curriculum integration, and infrastructure adaptability, rather than algorithms alone. The discussion spotlighted the vital role of robust evidence in driving AI policy adoption, innovation in public education, and sustainable scaling of impactful AI solutions.
- Brazil faces severe literacy challenges: 90% of adults are not fully proficient, and significant gaps exist between public and private school students.
- Letrus’ AI platform provides personalized writing feedback and data-driven insights, aiming to close educational inequality gaps.
- A major randomized controlled trial (RCT) in the state of Espírito Santo involved 178 schools; 110 schools used Letrus, 68 served as controls.
- High engagement was achieved: 95% of teachers used the tool; 82% of students submitted essays via the platform.
- Results showed a 17-point improvement in writing test scores and a 35% increase in teacher-student one-on-one engagement.
- The program reduced the gap between public and private school students’ writing scores by 9%.
- After implementation as public policy, Espírito Santo rose from 11th to 1st in the national writing ranking within one year.
- Letrus has now scaled to 1,600+ schools, serving 500,000+ current students, and more than 2 million cumulatively.
- No government contracts have been cancelled or left unrenewed, and every partnering state improved its performance.
- Letrus’ implementation strategy emphasizes offline capability, curriculum adaptation, and administrator insights beyond just AI algorithms.
- Letrus was recognized as the best education technology globally by UNESCO in 2020.
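The headline RCT figures above rest on a simple comparison of randomly assigned treatment and control schools. The sketch below shows the basic difference-in-means estimator behind such a trial; the scores are hypothetical illustration data, not numbers from the Letrus/J-PAL study.

```python
import math
from statistics import mean, stdev

def diff_in_means(treated, control):
    """Difference-in-means treatment effect with its standard error:
    the core estimator of a two-arm RCT like the Letrus trial.
    Randomization is what lets the difference be read causally."""
    effect = mean(treated) - mean(control)
    se = math.sqrt(stdev(treated) ** 2 / len(treated)
                   + stdev(control) ** 2 / len(control))
    return effect, se

treated = [560, 575, 590, 610, 585, 600]  # hypothetical writing scores
control = [550, 565, 570, 580, 560, 575]
effect, se = diff_in_means(treated, control)
print(f"estimated effect: {effect:.1f} points (SE {se:.1f})")
```

A real evaluation would additionally cluster standard errors at the school level, since in the trial whole schools, not individual students, were assigned to treatment or control.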
AI for ALL Challenge & Panel on Leveraging AI for Development in the Global South
The transcript covers three major AI-driven innovations presented at the India AI Impact Summit 2026. The first solution leverages AI and proprietary patented technology to scan, grade, and digitize the quality of agricultural produce, eliminating human bias, enabling remote bidding, and increasing farmer incomes by 8–20%. This system is already operational at scale across India and is being exported globally. The second solution, Madunitra AI, targets diabetic retinopathy screening by providing a camera-agnostic, offline-capable AI diagnostic tool that helps bridge the gap in specialized care, with validated accuracy above 97% and real-world deployment across 41 health centers, already screening over 10,000 patients. The third initiative, Helios 2.0 by Thinkerbell Labs, is an inclusive education middleware platform that adapts and transforms educational content for students with special needs. It acts as a bridge between content creators and assistive technologies, aiming to democratize personalized learning and is currently in beta deployment, building on the success of its predecessor across 300 Indian centers. Together, these efforts demonstrate India’s leadership in deploying AI for social equity, transparency, and large-scale impact across agriculture, healthcare, and education.
- An AI-powered digital infrastructure now scans each fruit for defects, size, color, and weight, removing human bias and standardizing India's agricultural market.
- The platform links AI-certified quality data directly to pricing, enabling remote bidding and boosting farmer incomes by 8–20%.
- Over 70,000 metric tons of fresh produce digitized and sorted, with machinery exported internationally to markets like Brazil, Chile, and Spain.
- Technology built on five granted patents and responsible AI benchmarks, with backing from Google Launchpad, NASSCOM AI for Good, and NVIDIA Inception.
- Madunitra AI provides diabetic retinopathy screening using a camera-agnostic, high-accuracy (>97%) AI solution and is deployed in 41 health centers with support from Wadhwani AI and the Indian Ministry of Health.
- Over 10,000 patients have already been screened for diabetic retinopathy, leveraging 80,000 annotated images and advanced AI for reliable results.
- Business models include device rental/sales (agri-tech), percentage-based marketplace partnerships, and health sector collaborations.
- Helios 2.0 enables seamless adaptation of educational content for students with various disabilities, connecting teachers, content creators, and assistive tech.
- Helios 2.0 is slated for roll-out across 300 centers, with a long-term vision to support district, state, and potentially nationwide implementation alongside India's NEP (National Education Policy).
Inclusive AI at Scale: Turning Social Impact into Sustainable Markets
The session at the India AI Impact Summit 2026 centered on bridging global divides in AI access, the emerging need for 'AI diplomacy,' and India's strategic role as both a model and provider of scalable, inclusive AI solutions for the global south and beyond. Panelists stressed the urgency of establishing AI governance frameworks that give developing nations substantial representation—not just on risk, but on development, opportunity, and scale—while highlighting India's open innovation and digital public infrastructure as successful templates. Corporate leaders, including those from Microsoft, emphasized the transition from AI experimentation to full-scale deployments with real population-level impact, particularly in critical sectors like healthcare, finance, and public service. The conversation further addressed the criticality of robust compute infrastructure, sovereign cloud investment, globally interoperable yet locally adaptable frameworks, and the necessity for widespread upskilling. Ultimately, the session underscored India's potential not only to drive equitable AI adoption domestically, but also to serve as a solutions provider for similarly situated economies worldwide.
- AI diplomacy and governance must address fragmentation and ensure meaningful participation from the global south on key issues like data governance, algorithms, accountability, safety, and cross-border data flow.
- Recent steps, such as increased representation of developing countries on UN scientific panels, signal progress toward inclusivity.
- India’s open-source Digital Public Infrastructure (DPI) is acclaimed as a trusted, scalable template for democratizing AI benefits.
- Panelists called for governance frameworks that are globally interoperable and locally adaptable—balancing universal trust with specific developmental contexts.
- Development-centric AI governance is needed, with focus areas such as healthcare and education at population scale for emerging economies.
- A shift from AI pilots (2024-2025) to full-scale, impact-driven deployments is underway, spearheaded by ambitious leadership and an emphasis on ROI and widespread upskilling.
- Emphasis on treating AI compute infrastructure as a utility, with investments in sovereign, secure, and energy-efficient cloud resources to lower barriers for enterprises of all sizes.
- Open and modular solutions, coupled with cost-efficient, foundational platforms like Aadhaar, enable nation-scale impact and create replicable models.
- India’s achievements position it to export AI solutions and technical guidance to other emerging markets facing similar social infrastructure challenges.
AI for All: Jobs, Growth, and Opportunity
The session at the India AI Impact Summit 2026 brought together leading experts to discuss the intersection of AI, technology, and climate disruption on India's labor market. Sabina Dewan, President of the Just Jobs Network, painted a contrasting picture of India's economy, highlighting the coexistence of impressive GDP growth and persistent jobless growth, particularly affecting India's 370 million youth. She emphasized that unemployment figures are misleadingly low due to the compulsion to take up informal or low-quality jobs, while formal skilling and social protection coverage remain critically insufficient. Both Sabina Dewan and Dr. Claire Melamed from the UN Foundation underscored the urgent need to address the pace and scale of AI-driven disruption, calling for global governance frameworks for wealth redistribution, robust social protection, and a more nuanced, localized approach to labor market transition. The panel called out the inadequacy of current institutions and policies to keep up with these changes, warning of cascading effects across sectors and communities if adaptation is slow. OpenAI representatives acknowledged the complexity of these systemic challenges and expressed willingness to collaborate with policy researchers and economists to guide government interventions tailored to the Indian context.
- India's official unemployment stands at 3.1%, but youth unemployment is triple that rate; 370 million Indians are aged 15-29, a population larger than any industrialized nation's total.
- Only 4.1% of India's workforce self-identifies as having any formal skills, highlighting a vast skilling gap despite years of policy focus.
- Economic growth has not translated into proportional job creation—so-called 'jobless growth' persists even as innovation and entrepreneurship rise.
- Over 90% of Indian employment is informal; disruptions to the small formal sector will have disproportionate impacts and cascading effects throughout the broader economy.
- AI, climate change, and energy transition are converging forces disrupting labor markets, demanding new approaches beyond sectoral thinking.
- Current social protection and wealth redistribution systems are inadequate, both at the national and global levels; the panel stressed the need for new institutions to manage wealth transfers and transitions.
- Migration—of money, jobs, or people—will be a politically contentious but inevitable part of adjusting to rapidly shifting labor markets.
- CEEW identified 48 million potential new jobs across 36 value chains in the green economy, but access depends on who is affected most by AI and climate-related disruptions.
- Institutional and policy responses are currently not keeping pace with the scale and speed of AI impacts; collaboration between government, research, and industry is essential.
AI in Action: Building and Deploying Enterprise AI Workers | India AI Impact Summit 2026
This session at the India AI Impact Summit 2026 focused on the practical transformation of enterprises into AI-native organizations by addressing key challenges related to legacy operations, data silos, fragmented AI systems, and responsible AI deployment. The speakers presented a comprehensive approach through the development of an AI Operating System (AIOS) for enterprise operations, emphasizing the importance of data-context unification, orchestration, governance, and modularity for rapid deployment, scalability, and compliance. Announcements included the launch of a new AI cloud data center in Chennai with liquid-cooled GPUs, the introduction of the AI Studio for streamlined model lifecycle management, and progress toward ISO/IEC 42001 certification, positioning the platform as both a sovereign and compliant solution for Indian enterprises. The session also featured hands-on masterclasses and live demonstrations of AI workers, showcasing domain-specific use cases and collaborative development, underlining the summit's commitment to both innovation and practical enterprise enablement.
- Enterprises are challenged by data silos, fragmented AI islands, lack of context unification, and slow project-to-production cycles.
- An AI Operating System (AIOS) is under development, comprising infrastructure (including sovereign GPUs), a semantic context layer, orchestration (AI workers, tools, integrations), voice AI systems, governance/audit layers, and SDKs for multi-channel integration.
- Launched a new AI cloud data center in Chennai with liquid-cooled GPUs-as-a-service, expanding to over five Indian metro data centers.
- Introduced AI Studio for end-to-end AI lifecycle management—supporting model development, deployment, governance, and scaling, with serverless and integrated DevOps.
- YA Cloud is pursuing ISO/IEC 42001 certification and is expected to be the first Indian hyperscaler to achieve it, with full data plane and control plane compliance in India.
- Platform offers 40% faster time to deploy, 30% lower total cost of ownership (TCO), and 100% compliance.
- Live masterclass and demo session highlighted multi-domain AI worker applications, with hands-on participation and voting, focusing on practical, repeatable deployment in HR, IT, and finance.
AI for Social Good: Nonprofit Innovations at Scale
The session at the India AI Impact Summit 2026 highlighted the critical and often overlooked role of nonprofits in shaping AI for social good, particularly for marginalized communities. The discussion showcased how diaspora networks, major philanthropic foundations, and hands-on social entrepreneurs are driving the use of AI beyond commercial applications to address social justice, misinformation, employment, and language inclusivity. Key participants included leaders from the Patrick J. McGovern Foundation (noted as the largest AI funder in the nonprofit sector), Aspen Digital, Karya, and initiatives focused on bringing local languages into AI. The conversation stressed the importance of nonprofits not only utilizing AI for efficiency but also being active architects in how AI is designed, localized, and embedded in diverse communities. Specific examples illustrated how AI is being deployed to combat deepfakes, support reliable news ecosystems, and empower rural and tribal populations—even unlocking support for tribal languages within tight local timelines. All panelists agreed on the need for shifting the AI narrative towards community dignity, access, and agency, warning against the risks of misinformation and the erosion of trust arising from synthetic media and unsustainable news models.
- Indiaspora, an influential diaspora group, now has chapters in six countries and is convening a global forum in Bangalore next month.
- Session focus: AI in nonprofits—with four diverse panelists representing funding, media, employment, and justice sectors.
- The Patrick J. McGovern Foundation has donated over $500 million to AI-focused nonprofit initiatives in the past five years, making it the largest contributor globally in this area.
- Panel perspective: Nonprofits must become not just AI users but architects of AI systems, ensuring local and community-centered design.
- Vivian Schiller (Aspen Digital) highlighted risks AI poses to information ecosystems: spread of deepfakes and synthetic media, and the undermining of news organization business models.
- Alarm raised over public distrust in media due to AI-generated misinformation, possibly leading to authoritarian information control.
- Karya's project in Nandurbar, Maharashtra: in just 45 days, datasets in six tribal languages were collected, and models were built, fine-tuned, and evaluated entirely by the local community for deployment across the health, education, finance, and legal sectors.
- India has over 19,000 dialects; inclusive AI initiatives can significantly raise local empowerment and linguistic representation.
- Nonprofits are building compute capacity, talent, and data resources within communities rather than relying solely on tech industry giants.
- Panel called for a new AI participation model prioritizing dignity, access, and inclusion—beyond profit and engagement metrics.
Faculty Futures 2035: Human-Centred AI Education
The session, moderated by the founding dean of the School of Arts, Science and Technology at University Canada West, focused on the evolving role of faculty in higher education as artificial intelligence (AI) systems become deeply integrated into academic environments. Key collaborators from University Canada West (Canada), Chandigarh University, Uttar Pradesh (India), and Canterbury Institute of Management (Australia) discussed outcomes from the 'Faculty Futures 2035' initiative, which originated as a collaborative inquiry into how institutions can intentionally prepare faculty for increasingly AI-augmented educational systems. The initiative leveraged rapid foresight labs—a structured, time-boxed, rotational workshop format—to engage over 100 faculty and academic leaders in three countries in strategic sense-making exercises, identifying tensions and future pathways for faculty readiness. A central theme was the push for human-centered AI: ensuring that, despite automation, academic judgment, standards, and authority remain a human responsibility safeguarded by coordinated institutional action rather than ad-hoc individual upskilling. The session underscored the importance of aligning individual, collective, and institutional responses to technological change, aiming to create a global roadmap for faculty roles, responsibility, and institutional agency through 2035.
- Collaboration involved University Canada West (Canada), Chandigarh University, Uttar Pradesh (India), and Canterbury Institute of Management (Australia), with over 100 faculty and leaders engaged.
- Faculty Futures 2035 initiative was created to intentionally prepare faculty for AI-integrated higher education systems globally.
- Rapid foresight labs used a unique, time-boxed rotational workshop model with prompts at three levels: individual, collective/professional, and institutional.
- Key focus was on faculty role transformation, professional development as a coordinated system (not just episodic training), and institutional responsibility to maintain academic authority.
- Human-centered AI was emphasized as a design orientation, ensuring academic standards and decision-making remain a human responsibility even as AI mediates knowledge work.
- Strategic horizon mapped to 2035 with three guiding questions: coordination among stakeholders, system redesign, and building future capability.
- Findings will shape an actionable roadmap for global higher education system adaptation to rapid AI-driven transformation.
- Workshops shifted discussion from tool-based adaptation to deep structural change in roles, expectations, and institutional processes.
AI for Economic Growth and Social Good
The panel at the India AI Impact Summit 2026 focused on the transformative potential of artificial intelligence (AI) for India's economic growth and social good, especially in pursuit of the 'Viksit Bharat' (Developed India) vision by 2047. Key sectors discussed included agriculture, manufacturing, banking, governance, education, and healthcare. Panelists highlighted AI's capabilities to address rural challenges, enhance agricultural yields, improve manufacturing efficiencies, enable financial inclusion, modernize governance, personalize education, and extend healthcare access—especially to underserved populations. Practical examples such as Mahindra Group's AI-powered 'Nidan' crop diagnostic app, satellite-driven sugarcane harvesting optimization, and language localization efforts exemplified ongoing innovation. Broad consensus emerged that India's rural and semi-urban sectors must be included, emphasizing AI's real-world applicability through multimodal and multilingual approaches. The discussion also noted the importance of leveraging AI to enhance public service delivery, ensure safety (as in autonomous vehicle features focused on accident prevention), and bridge the 'India and Bharat' digital divide. Finally, the session previewed the significance of cyber resilience as AI is further integrated into national infrastructure.
- Panelists identified agriculture, manufacturing, banking, governance, education, and healthcare as high-impact sectors for AI-driven transformation.
- Mahindra Group demonstrated real-world AI solutions: the 'Nidan' app for crop disease diagnosis and satellite analytics for sugarcane harvesting, covering over 100,000 acres.
- AI-enabled conversational and multimodal technologies are enabling rural users to access agricultural advice and services in native languages and dialects.
- Banking sector priorities include using AI and microdata (soil, land records, crop performance) to deliver financial services to rural populations (where 65% of branches and a large population reside).
- Governance applications range from monitoring urban services (waste collection, encroachment detection, traffic management) to automating citizen feedback.
- Education strategies involve integrating AI and data literacy into curriculums from secondary to higher levels, customizing complexity to students’ stages.
- Healthcare innovations highlighted include AI-powered diagnostics, drug discovery, and rural outreach through conversational AI, crucial for bridging care gaps.
- A recurring theme was the need for holistic and inclusive AI adoption to improve productivity, efficiency, and quality of life nationwide.
- Panelists emphasized addressing the 'India versus Bharat' divide—ensuring AI inclusivity for both urban and rural populations.
- Future opportunities hinge on the development of sovereign AI capabilities and addressing cybersecurity challenges at a national level.
AI Infrastructure for Public Good: Citizen-Centric Services
The India AI Impact Summit 2026 commenced with a high-profile panel featuring key AI leaders and stakeholders from technology, academia, defense, and government. The focus of the session was on the India AI Stack—a comprehensive, sovereign technological framework aiming to anchor national growth, innovation, and pride in the AI era. The panel highlighted the critical need for India to transition from being primarily a service provider to an Original Equipment Manufacturer (OEM) across the AI value chain, emphasizing the importance of ownership at both application and hardware layers. Diverse perspectives showcased practical advancements, including AI for healthcare in low-resource settings, indigenous AI hardware research and production, and sovereign military AI systems tailored to Indian operational realities. This collaborative approach, uniting application builders, technology manufacturers, end users (like the military), and strategic planners, underscores India's commitment to developing a truly independent, robust, and inclusive AI ecosystem poised to serve both domestic and global needs.
- India hosts the world's largest AI expo, reflecting national pride and global engagement.
- The India AI Stack is framed as a strategic multi-layered framework: application (foundational models), hardware, and implementation across sectors.
- India is moving from an AI service-based model to aspiring for OEM status with both software (AI applications, foundational models) and hardware (GPUs, cloud stack, neuromorphic semiconductors).
- Panel highlighted indigenously developed AI solutions for geospatial, defense, and healthcare applications, with successful deployments already in use by stakeholders such as the Indian Army.
- Defense perspective emphasized the urgent need for absolute AI sovereignty—avoiding reliance on foreign technology in critical and sensitive military domains, especially in light of recent geopolitical tensions.
- Academic and industry leaders reported advanced R&D into future building blocks of AI hardware, such as neuromorphic computing, aiming for commercialization in the coming years.
- Government and PSU initiatives demonstrated large-scale AI and geospatial technology deployments for public welfare, urban planning, disaster management, and environmental monitoring.
- Inter-operability frameworks, such as ABDM for health data, were emphasized as enablers for inclusive, explainable, and responsible AI tailored for India's unique challenges.
AI in Healthcare for India
At the India AI Impact Summit 2026, key officials from the National Health Authority (NHA), Ministry of Health, and IIT Kanpur formally launched two landmark initiatives to transform healthcare in India through AI: the sector-specific 'Strategy for Artificial Intelligence in Healthcare for India' (SAHI) and an open AI Benchmarking Platform for healthcare solutions. The SAHI strategy underlines the need for ethical, evidence-based, and inclusive AI adoption tailored specifically for healthcare, emphasizing governance, data quality, capacity-building, innovation, and global leadership. The open benchmarking platform, a collaboration between NHA and IIT Kanpur, will enable developers to test AI healthcare models on real-world Indian data, facilitate third-party validation, and ensure safety, trust, and alignment with Indian requirements. Both initiatives were developed through extensive stakeholder consultation and aim to accelerate responsible AI adoption, improve outcomes, foster innovation, and build global credibility for India as a leader in AI-driven healthcare.
- Launch of the 'Strategy for Artificial Intelligence in Healthcare for India' (SAHI), a sector-specific framework addressing ethics, safety, data, capacity-building, and innovation in AI for healthcare.
- Introduction of a national, open AI Benchmarking Platform—developed by the NHA and IIT Kanpur—to rigorously test and validate AI healthcare solutions on Indian datasets before deployment.
- SAHI aligns with the seven principles of AI governance from MeitY: trust, person-centricity, innovation-friendly, fairness/equity, accountability, explainability by design, and safety/resilience.
- The strategy was developed through nationwide workshops with clinicians, health tech companies, policymakers, and other stakeholders, reflecting deep deliberation.
- Five pillars of the strategy: governance/regulation/trust, data and digital infrastructure quality, workforce/institutional capacity, research/innovation/evidence, and ecosystem for population-scale deployment.
- The benchmarking platform provides third-party evaluation of AI solutions, verifying that they are trained on relevant Indian data and are trustworthy, reliable, and applicable in the Indian healthcare context.
- Outcomes aimed: safe and ethical AI adoption, expanded access to quality affordable care, reduced inequities, improved public health system performance, and positioning India as a global leader in responsible AI for healthcare.
- Both announcements mark a decisive shift from experimentation towards scalable, systematic, and safe implementation of AI in India's health sector.
How AI Is Reshaping Education and Learning Outcomes
The panel discussion at the India AI Impact Summit 2026 brought together leading academicians and industry experts to explore the evolving role of artificial intelligence in education, with a special focus on the Indian context. Panelists discussed the transformation of academic institutions towards AI-nativity, emphasizing the need for students to develop action orientation, AI-human collaboration skills, teamwork, effective communication, and cross-domain knowledge. Amity Online highlighted its pioneering efforts in scalable, AI-enabled online education, referencing the massive potential for technology to address India’s growing educational demands, as well as the use of AI for personalized learning, assessment, and cost reduction. The conversation also addressed foundational issues of trust and fairness in AI deployment, with calls for human oversight and ethical, bias-mitigating design by AI developers. Differentiating global approaches, the panel underscored the necessity to localize AI implementations in recognition of India’s language, cultural, and infrastructural diversity. The need for contextualized policies and adequate investment in digital infrastructure was highlighted, alongside the critical role of responsible leadership in navigating AI’s integration into academia. Finally, the panel touched on shifting perceptions and misconceptions about AI adoption among business leaders as organizations accelerate their transformation into data-driven, AI-enabled enterprises.
- Jaipuria Institute of Management is positioning itself as an AI-native academic institution, emphasizing skills in action orientation, AI-human collaboration, teamwork, and interdisciplinary knowledge.
- Amity Online, India’s first university licensed for online degrees, now serves over 200,000 students using AI for personalized learning experiences, assessments, and cost-efficient content creation (10X reduction in development costs).
- AI-powered virtual professors and AI-proctored examinations are integral to Amity Online’s pedagogical model, enhancing scalability and academic integrity.
- There is a rapidly growing imperative to serve 40 million new higher education entrants in India in the next 10 years, impossible to achieve via traditional brick-and-mortar campuses alone.
- Panelists cautioned that trust and fairness in AI systems—especially in high-stakes domains like education—should be the responsibility of developers, necessitating human oversight, ethical values, and active bias mitigation.
- Case example cited where AI-based student screening favored English speakers, exposing inherent language and cultural biases in AI models.
- Academic institutions with global footprints, such as University of Southampton, highlighted the critical need to localize AI adoption in India given language diversity, digital infrastructure variances, and differing educational philosophies compared to Western or Scandinavian models.
- Anthropic’s office opening in Bengaluru was noted as a significant move toward AI localization for India.
- The regulatory landscape in India is evolving; effective, contextualized policies for responsible and fair AI use are still in development.
- Industry leaders’ understanding and approach toward AI adoption has changed markedly in the last few years, with an urgent shift from underutilization of data to rapid AI integration.
Collaborating Across Sectors for AI Impact | Sovereign AI, PPP & AI Skilling
This panel session at the India AI Impact Summit 2026 focused on the transformative potential of digital public infrastructure and the pivotal role of public-private partnerships in advancing AI readiness and deployment, particularly in emerging markets. Intel highlighted its ambitious '30-30-30-30' initiative to upskill 30 million people across 30 countries by 2030 through partnerships with 30,000 institutions, noting significant progress in India where over 350,000 students have cleared class 10 exams in artificial intelligence. The session emphasized the foundational elements of AI talent development—mindset, skillset, and access to toolsets—and discussed unique local challenges, such as those in Suriname, with calls for inclusive leadership and regional collaboration. Finally, the discussion underlined the urgent need for organizations to institutionalize AI by establishing empowered Chief AI Officers (CAIOs) with cross-functional authority, strategic oversight, and direct board-level accountability to drive holistic transformation and ensure AI benefits are widely distributed across society.
- Intel's '30-30-30-30' initiative targets upskilling 30 million people in 30 countries with 30,000 partner institutions by 2030 and may achieve the goal ahead of schedule.
- Over 350,000 students in India have completed their Class 10 examination in artificial intelligence as a regular subject, demonstrating early AI education at scale.
- The success of AI skilling hinges on three vectors: shifting mindset, enabling relevant skill sets (beyond coding), and providing access to AI toolsets.
- Suriname's leadership highlighted challenges in leveraging digital technology for national prosperity, emphasizing inclusive development, regional partnerships, and social protection.
- Sustained impact requires bridging public and private sectors, academia, and government (the 'triple helix' model) to tailor technology adoption to local needs.
- Organizations must shift from fragmented AI strategies to institutionalized approaches by appointing Chief AI Officers with genuine authority, budgetary control, and board-level access.
- Every Fortune 500 company, along with government agencies in the US and UAE, is expected to create empowered CAIO roles in the near future, moving beyond technocratic or symbolic positions.
AI for Medical Imaging and Diagnostics
The panel session at the India AI Impact Summit 2026 brought together leading experts from the fields of medicine and engineering to discuss the transformative impact of artificial intelligence in Indian healthcare. Panelists, including clinicians specializing in anesthesiology, radiology, oncology, and pathology, as well as AI engineers, presented practical use cases showing how AI is streamlining diagnostic processes, expediting treatment workflows, improving accuracy, and reducing the physician workload. They highlighted projects leveraging AI for chronic pain diagnosis at the primary healthcare level, deep learning for gastroenterology, radiology-based cancer prognostics, and workflow triaging in high-volume pathology. AI is significantly reducing time to treatment, enhancing diagnostic consistency, and optimizing resource allocation. The discussion acknowledged that while clinical decision-support systems are improving, the doctor-in-the-loop approach remains essential. The increasing use of AI by patients for self-diagnosis poses new challenges and opportunities for integrating AI-driven tools at every step of the healthcare journey. The panel asserted AI’s role as both a multiplier of clinical capacity and a tool for ensuring robust, reliable, and individualized care.
- Panel comprised of senior clinicians from leading Indian hospitals and universities, and AI engineers.
- AI projects underway include an ICMR-funded chronic pain diagnosis tool for primary care and deep learning applications for gastroenterology and multiple cancer types.
- 40% of recent oncology radiotherapy department publications are AI-based; a workflow innovation cut assessment time from 3 days to 1 day for cancer patients.
- AI is being used to triage high-volume pathology cases, reducing expert review workload and speeding identification of critical cases.
- Panelists assert AI improves diagnostic accuracy, reduces time to treatment, standardizes clinical decision-making, and enhances workflow efficiency.
- AI is already significantly reducing inter-observer variability in pathology diagnoses, acting like a real-time second expert.
- Decision-support AI in radiology can recommend appropriate investigations, reducing unnecessary tests and patient waiting times.
- AI enables clinicians to see more patients per day, enhancing healthcare delivery capacity.
- Persistent challenges include integrating AI in clinical decisions without replacing human oversight—the doctor-in-the-loop remains essential.
- Patients increasingly use AI assistants (e.g., ChatGPT) for self-diagnosis, shifting the role of clinicians in verification and care.
AI Commons for the Global South: Data, Models, and Compute
The session at the India AI Impact Summit 2026 centered on redefining AI infrastructure as essential civic infrastructure, much like digital public infrastructure, emphasizing that the real challenge in AI adoption is not its deployment but ensuring equitable and meaningful access across society. Panelists brought perspectives from government, academia, global technology firms, and grassroots organizations. The Telangana government's experience demonstrated that merely providing digital infrastructure is insufficient without fostering local agency, user trust, and tangible value. Academia faces unique barriers such as limited access to compute and data, highlighting the need for dedicated academic compute clouds and an open data movement. Big tech, represented by Meta, showcased innovations like AI-powered glasses for accessibility and payments, underscoring the importance of localized data and open-source partnerships. The discussion underscored cross-sector collaboration, with public-private partnerships and inclusivity as critical enablers for embedding AI's benefits at the population scale.
- AI infrastructure is being reframed as civic infrastructure vital for equitable participation and governance, beyond just technical capabilities.
- Telangana's household fiber connectivity program revealed that physical infrastructure alone does not guarantee adoption; local context, trust, and immediate value are crucial for meaningful technology uptake.
- Grassroots digital empowerment was achieved through village-based digital kiosks, often led by women entrepreneurs, supporting digital literacy and local entrepreneurship.
- Academic leaders highlighted a global lag in university access to compute and data resources, advocating for an 'academic compute cloud' for Indian institutions.
- The need for an 'open data movement' is emphasized, urging governments to make key datasets available for research, startups, and public benefit applications.
- Meta outlined a vision for personalized, agentic AI—like AI glasses that assist the blind and low-vision community—which have already launched in India in multiple regional languages and are being used for navigation and payments.
- Meta contributed a 12-billion-token, 4-million-pair dataset in 10 Indian languages to the government's open-source AI library, demonstrating industry support for multilingual, accessible AI solutions.
- Public-private partnerships and open-source collaboration are recognized as foundational for advancing inclusive, large-scale AI adoption in India.
PM Modi warmly receives World Leaders for AI Impact Summit 2026 l IndiaAI
The India AI Impact Summit 2026, hosted in Delhi, is serving as a major global forum for discussions on the equitable and people-centric application of artificial intelligence. Distinguished leaders from around the world—including from neighboring countries like Bhutan and Sri Lanka, the Hellenic Republic, UN officials, and technology industry leaders—are convening with a particular focus on ensuring that AI technology benefits all, especially the underrepresented and developing nations. The summit's central themes are 'People, Planet, and Progress,' reflecting India's commitment to human-centric AI that prioritizes inclusive growth, environmental sustainability, and the empowerment of women and youth. India is showcasing its pioneering digital initiatives in healthcare, education, and digital payments, positioning itself as a model for AI-driven transformation in the Global South. Critical topics include the democratization of data, international collaboration for AI standards, skill development for youth, gender inclusivity, and supporting sustainable development goals through AI. India reiterates its vision for AI as a bridge to reduce global inequality, leveraging its experience with solutions like CoWIN and Amul’s AI-driven women empowerment, while also acknowledging and preparing to address emerging concerns over job displacement and digital divides.
- Summit brings together heads of state from Bhutan, Sri Lanka, Greece, Mauritius, and key international organizations (UN, IMF), underscoring global participation.
- India’s AI Impact Summit is themed on 'People, Planet, and Progress,' emphasizing human-centric, inclusive, and environmentally conscious AI strategies.
- India asserts AI must benefit all, prioritizing equitable access, especially for the Global South and underrepresented communities.
- India highlights its achievement as the only country to pledge net-zero emissions by 2070 with rapid advances in green and renewable energy.
- Digital Public Infrastructure initiatives (e.g., CoWIN platform) and AI-driven programs like Amul’s support for 3.6 million rural women are showcased as models for inclusive growth.
- Major global technology CEOs and companies are actively participating and seeking to expand collaborations with India in AI innovation.
- AI for healthcare is a major focus, with calls for improved data collection, standardization, and context-specific AI models for diverse populations.
- Skill development, especially among youth, and support for professional mobility through international agreements is identified as a strategic pillar.
- Women’s empowerment through technology is highlighted, with a focus on digital inclusion and leveraging AI for social impact.
- Summit positions India as a proactive leader in shaping ethical, accessible, and human-driven AI for global sustainable development.
Day 3: Plenary Hall B | Sridhar Vembu, Jay Chaudhry, CP Gurnani & Global Leaders
The session at the India AI Impact Summit 2026 showcased rapid advancements by the Sarvam team across speech, text-to-speech, vision, and large language models, all designed for the Indian context and for scalable, efficient deployment. They announced a robust multilingual speech recognition system; Sarvam Bulbul, a natural-sounding, production-ready text-to-speech solution with expressive Indian voices; and breakthroughs in document AI with a vision model that can extract, caption, and structure data from highly complex documents, even handling challenging handwritten scripts with near-perfect accuracy. Sarvam’s vision model was benchmarked to outperform international competitors. The team demonstrated real-world impact, such as live-dubbing the Finance Minister’s budget speech into multiple languages in real time for millions of households, helping break down language barriers in education and access to information. The next evolution highlighted was the release of ‘Sarvam 30B’, a 30-billion-parameter mixture-of-experts large language model trained from scratch and optimized for efficiency, agentic reasoning (tool use), and long-context conversations, including in Indian languages, with competitive or superior performance to global models at a fraction of the computational cost. These advances enable affordable, scalable AI deployment for education, content creation, and personalized feedback at population scale, with partnerships in place (e.g., with government and educational institutions) for nationwide implementation.
- Unveiling of Sarvam’s low-latency, multilingual speech recognition system with over a billion parameters—capable of real-time, cross-language, multi-speaker operation.
- Launch of Sarvam Bulbul, a production-grade, expressive Indian text-to-speech model optimized for diverse use cases (storytelling, contact centers, local accents, education) and for streaming.
- Sarvam vision model built from scratch in 8 months: a 3B-parameter hybrid model (state-space and attention layers) trained on 16T tokens, grounded on 400M images, and fine-tuned for high-fidelity document digitization.
- Vision model handles complex document structures (tables, handwriting, multiple languages/scripts), delivers knowledge extraction (including structured JSON/markdown output), and demonstrates near 100% OCR accuracy for historical Indian and English manuscripts.
- Live-dubbing of the Union Budget speech on national TV using Sarvam tech, reaching 45 million homes and making complex information immediately accessible in regional languages.
- Vision model benchmarked to outperform global open-source models (e.g., by Allen Institute) on table detection, multi-column and handwriting recognition, especially for Indian languages.
- Advances in educational AI: Collaboration with DPS school, NCERT, and Ministry of Education to provide handwritten answer recognition and personalized feedback at scale.
- Announcement and impending release of Sarvam 30B: a 30B-parameter mixture-of-experts large language model with only 1B active parameters per token, trained in-house, supporting 32,000-token context windows and outperforming comparably sized models on reasoning, tool use, and Indian-language fluency.
- Sarvam 30B enables highly efficient, long-context, real-time conversational AI, beating international models on both 'thinking budget' and benchmark tasks while slashing compute requirements.
- Overall focus on efficiency, scalability, and affordability to ensure that advanced AI reaches population scale across India.
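The efficiency claim behind a 30B-parameter model with only ~1B active parameters per token rests on mixture-of-experts routing: a gate picks a few experts per token, so most weights sit idle on any given step. Below is a generic top-k gating sketch, not Sarvam's actual architecture; the toy scalar "experts" and gate scores are illustrative:

```python
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def moe_forward(token, experts, gate_scores, top_k=2):
    """Route one token through only the top-k experts.

    `experts` is a list of callables standing in for expert sub-networks,
    and `gate_scores` are the router's per-expert logits for this token.
    Only top_k experts execute, so per-token compute scales with k rather
    than with the total expert count -- the trick that lets a large
    mixture-of-experts model activate a small fraction of its parameters.
    """
    probs = softmax(gate_scores)
    chosen = sorted(range(len(experts)), key=lambda i: probs[i], reverse=True)[:top_k]
    # Renormalize the gate over the chosen experts and mix their outputs
    norm = sum(probs[i] for i in chosen)
    return sum((probs[i] / norm) * experts[i](token) for i in chosen)

# Eight toy scalar "experts"; only two run for any given token
experts = [lambda x, k=k: (k + 1) * x for k in range(8)]
out = moe_forward(2.0, experts, gate_scores=[0.1, 3.0, 0.2, 2.5, 0.0, 0.1, 0.0, 0.3])
# out blends experts 1 and 3, weighted by the renormalized gate
```

In a real transformer the experts are feed-forward blocks and the gate is a learned linear layer, but the cost argument is the same: total parameters grow with the expert count while per-token FLOPs grow only with k.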
Aadhaar & AI: The Identity Paradox
The session at the India AI Impact Summit 2026, led by UIDAI officials, addressed the evolving challenges and opportunities posed by AI in digital identity management in India. With the rapid advancement of AI-driven deception technologies such as deepfakes and synthetic biometrics, the once-trusted correlation between physical presence and identity has become increasingly fragile. While UIDAI has leveraged AI successfully—for instance, with large-scale fingerprint liveness detection and biometric fraud prevention—the platform faces ongoing challenges around inclusivity, privacy, trust, and scalability. The discussion emphasized the need for industry collaboration to close technological gaps, particularly in keeping models unbiased, privacy-preserving, and robust against new threats. UIDAI is also exploring sovereign AI models, especially for voice-based authentication and user feedback, and is focusing on language and cognitive load to enhance accessibility and user experience. The organization underscored its commitment to responsible data stewardship, advocated for advanced privacy-preserving model training, and welcomed industry partnerships to develop next-generation identity security solutions.
- AI-driven deception techniques (fake voice, biometrics, facial images) have become industrialized and pose significant threats to digital identity verification.
- UIDAI serves over 1.4 billion (140 crore) Aadhaar identity holders, making it one of the largest biometric databases globally.
- Current AI models used by UIDAI have improved inclusivity, but challenges remain in enabling access for certain special populations (e.g., those unable to use face authentication).
- UIDAI emphasizes strict privacy protocols—data from Aadhaar holders is not freely accessible for AI model training and is protected via data-minimization principles.
- AI has enabled population-scale fingerprint liveness detection, considered a recognized industry success.
- Traditional biometric deduplication methods are being supplanted by machine learning models, offering better accuracy but introducing new concerns around bias and security.
- UIDAI is leveraging sovereign Indian AI models for voice-based authentication, fraud alerts, and real-time user feedback.
- Language and accessibility are major UI/UX challenges; most users remain on English interfaces despite multilingual support. UIDAI aims to address cognitive barriers using multimodal AI solutions.
- UIDAI seeks collaboration with industry to build advanced, next-generation models for fraud prevention, privacy-preserving data usage, and greater inclusivity.
- Scale and latency remain critical engineering hurdles for population-scale identity infrastructure.
India’s Creator Revolution: How AI is Democratizing Content for Millions
The session at the India AI Impact Summit 2026 highlighted the transformative role of AI, particularly generative AI, in democratizing India's creative and cultural sectors. Leaders from Google and the Ministry of Culture emphasized a vision where AI serves as an equalizing force that empowers diverse creative communities across urban and rural areas, rather than replacing human ingenuity. Through partnerships, innovative tools such as Google Arts & Culture, and AI-driven initiatives for manuscript preservation and folk art promotion, both tech and government leaders reaffirmed their commitment to opening up creative opportunities, breaking down language and geographic barriers, and supporting inclusive economic growth. Specific economic forecasts—such as generative AI’s potential to add over ₹400 billion to the creative sector by 2035—underscore the immense opportunity for India’s creative economy. The session further showcased collaborative models, practical applications of AI in museums and cultural preservation, and issued a call for wide-ranging partnerships spanning large tech companies, startups, and individual creators.
- Generative AI is positioned as a 'co-creator' that amplifies, rather than replaces, human creativity across India’s urban and rural creative sectors.
- Google announced its commitment to democratizing creative tools, focusing on inclusivity regardless of geography, language, or income.
- Launches and showcases included Google Arts & Culture integrations with Indian heritage and new AI-powered creative tools such as Veo 3, Flow, and Nano Banana.
- A landmark independent study by Public First projects the economic potential of generative AI for India’s core creative sector could exceed ₹400 billion by 2035.
- Examples include AI-assisted translation helping Hindi creators expand global reach by 280% and productivity tools saving creators 390 hours per year.
- Ministry of Culture demonstrated AI’s capabilities in the preservation, translation, and interactive engagement with manuscripts and folk art.
- Partnerships are being actively sought from large tech, startups, creators, and academics to mainstream folk arts and crafts through technology.
- The creation of the new 'Yuga Yugeen Bharat' museum will integrate AI for curation, visitor engagement, and collaborative planning.
- Structured public-private partnerships are being encouraged to enhance market access for artisans, promote cultural tourism, and grow the creative economy’s contribution to GDP.
- Both government and Google leaders stressed the importance of responsible, safe, and inclusive AI deployment and called for broad alliances to nurture a 'creative renaissance' for Bharat.
AI Beyond English: Governance and Deployment in the Global South
This session at the India AI Impact Summit 2026 delved deeply into the significant challenges and evolving initiatives around advancing multilingual AI systems, particularly for languages of the Global South such as those spoken in Africa and India. The session began with an analytical overview by the Center for Democracy and Technology, citing empirical research that illustrated major architectural, training, and evaluation gaps in how mainstream large language models (LLMs) handle low-resource and culturally diverse languages. Panelists identified overrepresentation of English in training data, poor-quality or non-existent datasets for many languages, and widespread reliance on Western-centric model architectures as central obstacles. Discussions highlighted active efforts like the Masakhane African Languages Hub’s work building culturally-relevant, multimodal datasets—including text, audio, and image components—for 40 African languages, alongside fostering direct translation capabilities between African languages themselves. Indian experts drew attention to the importance of moving beyond mere linguistic translation towards deep cultural localization in AI training and benchmarking, pointing to the nuances of caste, gender, and socioeconomic contexts often ignored in AI systems trained primarily on internet-sourced content. The session emphasized the necessity of South-South collaboration, intentional data collection, local benchmarking, and the creation of evaluation metrics by impacted communities to drive meaningful progress in equitable, context-sensitive AI development.
- The Center for Democracy and Technology found that most multilingual LLMs remain trained on >60% English data, with non-English content often underrepresented or machine-translated with errors.
- Existing model architectures tend to privilege English or Western language structures, often failing to capture the nuances of less-represented languages.
- Major AI models are often tested using automated benchmarks or machine-translated datasets not created or validated by the communities represented, leading to inaccurate performance claims in low-resource languages.
- The Masakhane African Languages Hub is developing multimodal, culturally-centered datasets (text, audio, images) for 40 African languages, aiming to improve language-to-language translation capabilities within Africa.
- A focused example: Nigeria—with over 200 million people and hundreds of languages—faces internal barriers to mutual understanding that AI-driven multilingualism could help solve.
- Indian AI research highlighted the importance of including cultural context in AI training—language alone is insufficient; aspects like caste, gender, religion, and local customs must also be represented.
- Both African and Indian perspectives stressed South-South collaboration and cross-contextual learning as critical to overcoming shared multilingual AI challenges.
AI Research Symposium: The Next Frontiers | Keynotes by Demis Hassabis, Yoshua Bengio & Yann LeCun
The opening session of the India AI Impact Summit 2026 research symposium set a collaborative, globally inclusive tone for the event, highlighting India's leading role in enabling impactful, responsible AI research and deployment. Distinguished guests—including government ministers, academic leaders, and global AI industry figures such as Demis Hassabis—emphasized the importance of interdisciplinary research, international partnerships, and a focus on real-world, population-scale AI solutions in health, agriculture, climate, and enterprise productivity. Major themes included the rapid ascent of AI toward artificial general intelligence (AGI), the necessity of balancing opportunity with caution and ethical responsibility, and the vital role of upcoming researchers and innovators. The session underscored the summit as both a knowledge-sharing platform and a catalyst for policies and partnerships that address both the promise and perils of advanced AI technologies.
- The research symposium is positioned as a flagship, working platform for AI impact, integrating research, policy, and real-world deployment.
- Over 250,000 mostly young participants (average age under 30) attended the AI Expo, signaling India's youth-driven AI momentum.
- Collaboration with leading academic institutions like IIIT Hyderabad and IITs ensures scientific rigor and platform diversity.
- Keynote speakers included global AI luminaries: Demis Hassabis (Google DeepMind), Dame Wendy Hall, Yoshua Bengio, and Yann LeCun.
- The summit will feature critical dialogues and panels on areas like AI for scientific discovery, frontier research, AI safety, and the Global South.
- Strong emphasis on nurturing emerging research talent and early-career innovators as central to India's AI ecosystem-building.
- The Government of India highlighted population-scale AI use cases in healthcare, agriculture, climate change, and safe, trusted AI.
- International cooperation is seen as essential to maximize benefits and mitigate the risks of AI, especially as AGI nears feasibility.
- Concrete recommendations and policy suggestions for safe, ethical deployment of AI are sought from the assembled experts.
- Demis Hassabis advocated for using AI to accelerate science and medicine, while acknowledging dual-use risks and the transformative power of AI within the next 5-8 years.
Responsible Quantum & AI: Exploring the Security Frontiers
The session at the India AI Impact Summit 2026, convened by ISB and the ISB Institute of Data Science, brought together leading experts to discuss the convergence of Artificial Intelligence (AI) and quantum computing—particularly focusing on responsible governance, security, and trust in the emerging technological era. The speakers emphasized the rapid adoption of AI, with substantial Indian and global uptake, and underscored the transformative potential of quantum computing, especially for problems AI currently struggles with, like traffic optimization and advanced drug discovery. Both technologies promise significant benefits but also introduce new layers of cyber risk, notably with quantum computing's ability to undermine current cryptographic methods. Real-world strategies, such as Google's hybrid encryption and the UK's national guidelines for migration to post-quantum cryptography (PQC) by 2030, were highlighted as proactive responses. The session repeatedly emphasized the importance of industry-wide collaboration, continuous software updates (rather than hardware replacements), and robust policy frameworks to ensure secure, resilient adoption of AI and quantum technologies.
- 78% of organizations globally and 89% of Indian enterprises use AI in at least one business function.
- The global AI market is projected to reach $1.8 trillion by 2030.
- Indian government has launched a National Quantum Mission, funding approximately $750 million.
- The global quantum computing market is expected to be valued between $1.8 and $3.5 billion by 2025.
- Quantum computing is increasingly moving from labs to strategic deployments, with major tech giants (Google, Microsoft, Amazon) actively involved.
- Quantum technologies can dramatically impact fields like traffic optimization, cancer treatment, and finance.
- Existing cryptographic algorithms face imminent threats from quantum advancements, necessitating urgent migration to post-quantum cryptography (PQC).
- Attackers are already harvesting encrypted data in anticipation of future decryption with quantum computers.
- Google recommends hybrid layered encryption and software-based library solutions (like Tink) for scalable PQC adoption.
- Migrating to PQC requires inventorying current cryptography usage, upgrading software, and ensuring compatibility with longer keys in new algorithms.
- The UK government has set clear PQC migration milestones, targeting full migration for government and critical sectors by 2030.
- International collaboration and standardization (e.g., via NIST) are key to forming robust cyber defenses for AI and quantum.
- Industry and government must prioritize responsible adoption, threat modeling, and policy updates to safeguard digital ecosystems.
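The hybrid layered encryption recommended above derives one session key from both a classical and a post-quantum shared secret, so the connection stays safe if either scheme survives. A minimal standard-library sketch of just the key-combination step (labels are illustrative; a real deployment would use a vetted library such as Tink rather than hand-rolled HKDF):

```python
import hashlib
import hmac

def hkdf_sha256(salt: bytes, ikm: bytes, info: bytes, length: int = 32) -> bytes:
    """Minimal HKDF (RFC 5869) over SHA-256, for demonstration only."""
    prk = hmac.new(salt, ikm, hashlib.sha256).digest()  # extract
    okm, block, counter = b"", b"", 1
    while len(okm) < length:                            # expand
        block = hmac.new(prk, block + info + bytes([counter]), hashlib.sha256).digest()
        okm += block
        counter += 1
    return okm[:length]

def hybrid_session_key(classical_secret: bytes, pq_secret: bytes) -> bytes:
    """Derive one session key from both shared secrets.

    `classical_secret` would come from, e.g., an ECDH exchange and
    `pq_secret` from a post-quantum KEM such as ML-KEM. Because both
    feed the KDF, the derived key remains safe as long as either
    exchange is unbroken -- the rationale for hybrid deployment
    during the PQC migration window.
    """
    return hkdf_sha256(
        salt=b"hybrid-kdf-demo",           # illustrative label, not a standard
        ikm=classical_secret + pq_secret,  # concatenate the two shared secrets
        info=b"session-key",
    )

key = hybrid_session_key(b"\x01" * 32, b"\x02" * 32)  # 32-byte session key
```

This also illustrates why the migration is a software problem: swapping which KEM supplies `pq_secret` is a library update, not a hardware replacement.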
The Open Revolution: Why Open Source is Key to AI for All
At the India AI Impact Summit 2026, Fabio Violante, VP and GM of Arduino, emphasized the pivotal role of open-source hardware and software in democratizing AI, especially as AI becomes increasingly embedded in physical devices and systems. He highlighted India's unique strengths—namely its scale, resilience in the face of constraints, and a vast, youthful developer community—and argued these can position the country as a global leader in physical AI innovation. Violante shared the story of Arduino’s origins as a tool to empower diverse learners and described its evolution into a globally pervasive open hardware and software platform, now with over 36 million annual downloads of its development tool, making India its second-largest user base. He stressed the critical transition from rapid prototyping to scalable platforms, which can drive sustainable impact. Notably, Violante announced Qualcomm’s dedicated $150 million fund for early-stage Indian hardware startups, underlining the ecosystem’s shift towards supporting physical AI at scale. He called for Indian educators, students, startups, industry, and policymakers to collaborate, focusing on entrepreneurship, scalable manufacturing, and paradigm shifts in education and investment, to move beyond ‘Silicon Valley imitation’ and build a distinct, scalable model for physical AI suited to Indian realities.
- Open-source hardware and software are essential for democratizing AI and accelerating innovation.
- India's key advantages include scale, adaptability to constraints, and a large, youthful talent pool.
- Arduino's open-source platform has seen over 36 million global downloads in the last year, with India the second-largest community.
- Transitioning from prototyping to building scalable platforms is critical for industry-level impact.
- Qualcomm has announced a new $150 million fund dedicated to supporting early-stage Indian hardware startups.
- Industry players are encouraged to open their platforms for wider education and startup integration.
- Policy makers are urged to streamline pathways from lab innovation to factory-scale implementation.
- Arduino’s mission focuses on lowering the barrier to tech entry by making products affordable and easy to learn.
- Violante called for a uniquely Indian model for physical AI, rather than imitating Silicon Valley, focusing on solutions at scale and under constraints.
- Educators should foster confidence and project-based learning, enabling students to build before feeling fully qualified.
Trusted AI for Everyone: USISPF Panel on Global AI Impact
The opening session of the India AI Impact Summit 2026, hosted by the US-India Strategic Partnership Forum, emphasized the massive challenge and opportunity of scaling 'trusted AI' for over 8 billion people worldwide, with a special focus on bridging the digital and economic divide between the global North and South. Keynotes and panelists from leading global tech companies, including Microsoft, Zoom, Rubrik, and Uniphore, underscored the urgent need for equitable access to AI infrastructure, skills training, and responsible deployment. Microsoft announced major investments in AI infrastructure and upskilling programs for India and the global South, while the panel reinforced the importance of inclusivity, multilingual capabilities, responsible governance, and designing AI solutions that are intuitive and accessible across all socio-economic strata. The session set a collaborative, action-oriented tone for the summit, framing AI not only as a driver of innovation and economic progress, but as a foundational tool whose benefits and risks must be equitably distributed through global cooperation and thoughtful policy.
- AI is rapidly becoming the new operating system for economies and societies, with both enormous promise and risks of widening global divides.
- Microsoft highlighted that at the end of 2025, 25% of the global North's working-age population uses AI, compared to just 14% in the global South, with the gap widening (growth rates: 1.8% North, 1.0% South in late 2025).
- Microsoft reaffirmed a $17 billion investment in India announced by Satya Nadella, with a new commitment to reach $50 billion by 2030 for AI infrastructure in the global South.
- Microsoft is launching a new initiative through Microsoft Elevate to upskill 2 million educators in AI, building on a prior commitment to train 20 million Indians in AI skills.
- Panelists stressed the urgency of equalizing access to AI infrastructure: data centers, connectivity, and electricity are all needed in tandem to empower the global South.
- Multilingual AI and content represent a key focus, with investments in datasets and tools to support the world's linguistic diversity.
- Real-world AI applications in sectors such as agriculture, food security, and productivity are crucial for stimulating demand and economic growth.
- Responsible governance, child protection, and future-of-work considerations are essential for earning public trust in AI.
- Zoom’s design philosophy—‘no hidden logic’—was cited as a model for AI usability, aiming to ensure even non-technical users (like an 83-year-old on a free Zoom account) can access the technology.
- The panel set a clear call for global partnerships between private and public sectors, aiming to leverage both for closing the AI divide and ensuring inclusive benefits.
How AI Can Power India’s Demographic Dividend
This session at the India AI Impact Summit 2026 focused on leveraging AI-driven digital discovery platforms (using the metaphor of 'blue dots') to unlock India's demographic dividend, particularly for youth, women, and people with disabilities. Key stakeholders described practical initiatives in areas like Ghaziabad where AI chatbots and centralized job discovery platforms are bringing together disparate data and stakeholders (industry, government, educational institutions) to create and surface tens of thousands of local job opportunities previously invisible to job seekers. Rather than a lack of capability or aspiration among the Indian population, the panel underscored the core challenge as one of discoverability—connecting unrecognized talent pools with available opportunities through the use of AI, driving greater economic inclusion and participation. The importance of integrating people with disabilities was also emphasized, with commitments to leverage AI platforms for better inclusion. The session highlighted successful pilots and a scalable, interoperable approach that could transform access to jobs, skilling, and welfare schemes on a national level.
- AI-powered 'blue dot' and 'purple dot' discovery platforms are being piloted to map both job seekers and opportunities, making them digitally findable for youth, women, and people with disabilities.
- In Ghaziabad, over 10,000 jobs were surfaced in 60 days via this AI-centered approach, far surpassing traditional job platforms.
- The initiative brought together diverse stakeholders—including MSMEs, government skill missions, industrial associations—into a single digital ecosystem at the district level.
- AI chatbots were used to engage with 30,000–40,000 potential job seekers, leading to the active registration and matching of 10,000 candidates.
- 15,000 people with disabilities ('purple dots') are already registered under UDID in Ghaziabad, positioning the project for substantial inclusive impact.
- The core problem identified is not lack of capability, but lack of digital visibility and discoverability of talent and opportunities.
- The integration opens the possibility for staffing agencies, skilling centers, and welfare scheme implementers to collaborate real-time, enhancing outcome tracking and accessibility.
- Legacy job portals were unable to surface local jobs effectively (fewer than 100 in Ghaziabad vs. 10,000+ by new AI systems).
- The approach is scalable and ready to expand to additional regions and beneficiary cohorts, including further inclusion of people with disabilities.
Safe & Trusted AI: Global Governance and Actionable Frameworks | India AI Impact Summit 2026
This session at the India AI Impact Summit 2026 brought together globally renowned AI researchers and policy leaders to discuss operationalizing 'safe and trusted' AI. Professor Stuart Russell from UC Berkeley emphasized the complexity of aligning AI systems with true human preferences, distinguishing between easily defined toy problems and messy real-world settings that risk misalignment—such as the King Midas problem. He outlined that human values are deeply nuanced and nearly impossible to prescribe in code, but assistance games and mathematical frameworks offer promising ways forward. The panel highlighted current practices like Reinforcement Learning from Human Feedback (RLHF) to convey human preferences, but acknowledged its significant limitations, especially when compared to the vast pre-training data consumed by large language models (LLMs), which may inadvertently encode unsafe objectives. Recognizing the risks of over-reliance on model-level safety, the session advocated for system-level controls—likened to the role of brakes in cars—and strong contextual guardrails at deployment. Australian policy approaches were cited as exemplars, focusing on safety standards for AI deployers and rigorous evaluation to calibrate public trust. Ultimately, the discussion underlined that robust AI safety and calibrated trust require ongoing research, transparent evaluation, and actionable policy, especially critical in populous and diverse contexts like India.
- AI alignment remains a major research challenge, since human preferences are complex and often cannot be fully specified.
- Large language models (LLMs) risk inheriting unsafe or undesirable human objectives from internet-scale pre-training data.
- King Midas problem illustrated: Misspecification of objectives can lead to unintended harmful consequences.
- Reinforcement Learning from Human Feedback (RLHF) is widely used but limited: it typically collapses diverse human preferences into a single, narrow reward model.
- System-level safety precautions (e.g., AI 'guardrails') are more effective than model-only safety—analogy: reliable brakes for fast cars.
- Australian standards prioritize responsibility for deployers, not just model developers, and require context-specific, system-level evaluation.
- Introduction of the Australian AI Safety Institute as an entity focused on robust, careful AI evaluation to ensure 'calibrated trust.'
- Trust in AI must be evidence-based and domain-specific; over-trusting or under-trusting without proper evaluation is risky.
- India’s context—with a massive and diverse population—accentuates the urgency for carefully calibrated and operationalized AI safety measures.
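The RLHF limitation noted above—that a single reward model cannot faithfully represent plural human preferences—can be sketched with a toy aggregation. The data and group labels below are entirely made up for illustration; this is not any panelist's example, just a minimal demonstration of majority-pooling of pairwise preferences:

```python
# Minimal sketch: fitting ONE reward signal to preference data from two
# groups with opposing values. Hypothetical data; shows how pooled RLHF-style
# comparisons make the fitted reward track the majority and erase the minority.

from collections import Counter

# Two candidate responses; the two user groups rank them oppositely.
responses = ["cautious", "direct"]
group_a_pref = [("cautious", "direct")] * 60  # 60 comparisons preferring "cautious"
group_b_pref = [("direct", "cautious")] * 40  # 40 comparisons preferring "direct"

# Pool all comparisons into one dataset, as a single reward model would see them.
all_prefs = group_a_pref + group_b_pref
wins = Counter(winner for winner, _ in all_prefs)

# The "reward" is just each response's win rate over the pooled data.
reward = {r: wins[r] / len(all_prefs) for r in responses}
best = max(reward, key=reward.get)

print(reward)  # {'cautious': 0.6, 'direct': 0.4}
print(best)    # 'cautious': group B's preference is outvoted, not represented
```

The point of the sketch is structural, not numeric: whatever the split, a scalar reward must pick one ordering, which is the "singular and narrow" limitation the panel flagged.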
The New Narrative: Elevating Your Storytelling with Generative AI
The session at the India AI Impact Summit 2026 brought together key voices from big tech, government, and media to discuss AI's transformative potential for Indian newsrooms and content creators. Speakers highlighted new tools like Gemini and AI Studio—presented as personal research assistants and 'CTOs' for content creation—enabling faster and more diverse content production. Google reaffirmed its commitment to supporting India's unique news ecosystem by adapting products for multi-format, multi-language content, and fostering partnerships with major and regional publishers. Government representatives celebrated effective training initiatives and endorsed responsible AI adoption, advocating for systems of transparency, human oversight, and continuous capacity building. The session closed with a call for a broad and sustained collaborative approach, stressing trust, responsible innovation, and ecosystem-wide best practices to ensure AI strengthens rather than destabilizes India's information landscape.
- Launch of advanced AI tools like Gemini (a private, multi-format research assistant) and AI Studio (enabling anyone to build apps, dashboards, and websites rapidly, acting as a 'CTO' for startups).
- Reporters claim AI tools are reducing research and decoding times from eight hours to just five minutes, leading to major productivity gains.
- Google is displaying news from both trusted publishers and creators side-by-side, and adapting products to support India's linguistic and content diversity.
- Economic potential for AI in India's creator sector estimated to exceed 240 billion rupees by 2035, as per a recent report from Public First.
- AI Skills Academy: Journalists from over 110 newsrooms are saving an average of eight hours per week through AI tools.
- PTI's AI-powered tools generate infographics 12 times faster, with social media engagement rising by 566%.
- Prasar Bharati's Akashvani now translates and broadcasts the 'Mann Ki Baat' programme in 24 languages simultaneously using AI.
- Government-endorsed Google News Initiative/IMC immersive training saw 100% participant recommendation, underlining its effectiveness.
- Strong advocacy for responsible AI: practical trust architecture, transparent disclosure of AI-generated content, non-negotiable human editorial oversight, and robust verification and accountability protocols.
- Policy push for capacity building, continual journalist training, research, collaboration, and supporting standards/best practices for AI in journalism.
- 'Move from tools to trust'—focus on AI that is transparent, auditable, safe by design, and reinforces the credibility of Indian newsrooms.
The Rise of AI Agents | Ensuring Safety & Inclusion in the Global South
The panel discussion at the India AI Impact Summit 2026 centered on the critical theme of AI safety, particularly regarding autonomous agentic systems across various sectors. The session reflected on the learnings from the previous AI Safety Summit in Kerala, highlighted the findings of a multi-sector AI safety report, and introduced an upcoming report on AI sovereignty for India and the Global South. Panelists explored maturity gaps in AI oversight across sectors such as BFSI, healthcare, education, and public services, noting the urgent need for sector-specific safeguards and oversight. Emphasis was placed on core ethical principles—including transparency, accountability, proportionality, and the rule of law—while warning against delegating legal responsibility to AI systems themselves. The discussion underscored the need for culturally contextualized, regionally harmonized, and sovereignty-conscious AI guidelines, favoring principle-based regulation over rigid European models. The importance of international collaboration (especially among Global South countries) and the inclusion of cultural, linguistic, and sovereign considerations was repeatedly highlighted to ensure safe, trustworthy, and inclusive AI deployments.
- The panel is a follow-up to the Kerala AI Safety Summit (December), announcing an AI safety report covering BFSI, defense, healthcare, retail, and education sectors.
- Findings show BFSI has the highest compliance and oversight, while education and healthcare sectors are lagging in safeguards and governance maturity.
- A new report on AI sovereignty for India and the Global South, based on extensive interviews and surveys, is set for release within two days.
- Six dimensions of AI safety were analyzed: lifecycle monitoring, emergent behavior safeguards, ethical autonomy, recursive governance, regional adaptive governance, and human oversight.
- Education sector noted as a significant area of concern with minimal protective measures against AI-driven deviant behavior.
- Healthcare carries high risk due to inconsistent governance and a lack of both escalation procedures and automated safeguards.
- UNESCO and OECD key AI principles—transparency, accountability, proportionality, and rule of law—are considered foundational, with a strong stance against granting AI systems legal personhood.
- Calls for regionally adapted, culturally sensitive, and sovereign-compliant AI guidelines, instead of one-size-fits-all regulation.
- Ongoing voluntary compliance in countries like Saudi Arabia; emphasis on embedding ethics and localization by design.
- Preserving language and culture is identified as a critical component in designing and deploying AI systems.
- India’s principle-based AI regulatory approach is contrasted with the EU’s prescriptive framework; advocates contextually relevant regulation that prevents accountability gaps.
Nations & Networks: Balancing Sovereign AI with Global Collaboration
The session at the India AI Impact Summit 2026 featured a candid conversation between representatives from NVIDIA and Akrit V., the founder of Activate, discussing their upcoming partnership aimed at boosting the Indian AI startup ecosystem. Akrit, an exited founder with significant angel investing experience, shared his entrepreneurial journey, emphasizing the unique technical talent and growing momentum in India's AI landscape. He explained Activate's focus on investing extremely early in technical founders—sometimes even before companies are formally incorporated—and working alongside them to build world-class AI businesses. Key points included the confluence of increased AI infrastructure availability, government support resembling India's digitalization surge a decade ago, and the need for patience amid both optimism and skepticism. The speakers made clear that with NVIDIA's Inception program joining forces with Activate, they aim to nurture homegrown AI innovation from the ground up, supporting the next generation of Indian AI startups for the long term.
- NVIDIA and Activate are partnering to support India’s next generation of AI startups, with an official announcement planned for the following day.
- Activate is a venture capital firm led by exited founder Akrit V. (founder of Haptik), focusing on extremely early-stage investments in technical AI founders—often before formal company incorporation.
- Akrit has completed 107 angel investments and built a deep network of Indian and global AI leaders as LPs and advisors.
- NVIDIA’s Inception program will align with Activate to provide technical and infrastructural support to nascent Indian AI startups.
- India’s AI ecosystem is at a highly nascent stage—described as “day zero”—but benefits from unmatched technical talent and fast-developing infrastructure.
- Recent advances: Increasing availability of AI models (both open and closed source), reducing costs of inference, and growing application-layer innovation.
- Government support for AI is compared to earlier successful digitalization policies (e.g., UPI, digital identity), creating foundational ‘rails’ for the future.
- The prevailing advice: Building AI in India requires patience and long-term commitment, with most breakthrough AI applications in India yet to be created.
- Both optimism and skepticism exist, but the message is to focus on sustained, mission-driven progress for ecosystem development.
France–India AI Collaboration: Ethics, Inclusion & Innovation in Action
The session addressed the complexities and opportunities at the intersection of AI, regulation, and the automotive sector within the context of India-France cooperation. Speakers emphasized that large language models (LLMs) are only one component of AI's potential in automotive applications, with advanced perception models and agentic AI taking center stage for safety and architecture. Key challenges outlined include legacy manufacturing infrastructure, data privacy, interoperability, and the need for secure, sovereign data arrangements. The panel proposed a suite of recommendations to enhance collaboration between India and France: launching a bilateral AI initiative to combine India's talent pool and scaling capacity with France's regulatory expertise and AI safety culture; embracing government-backed frameworks for data sharing and sovereignty; leveraging industry and academia through global capability centers and standardized industrial metaverse initiatives; and fostering talent pipelines via research exchange programs. The regulatory segment of the session contrasted the EU's stringent, complex, and costly compliance-driven approach with India's more pragmatic, principle-based, and flexible regulatory environment. Speakers argued that rather than adopting identical regulatory mechanisms, France and India should pursue regulatory alignment based on shared values of safety, transparency, and inclusivity, thereby creating robust, business-friendly ecosystems that can drive global innovation without regulatory fragmentation.
- LLMs are a visible part of AI but not suited for critical functions like safety in automotive; perception models and agentic AI are prioritized for these roles.
- India and France are encouraged to launch a joint AI initiative, leveraging India's talent and scaling capability with France's robust regulatory and safety frameworks.
- Recommendations span government (new bilateral missions, regulation for sovereign data), industry (transformation of GCCs, standardized metaverse for interoperability), and academia (exchange programs building 'day one AI ready' talent).
- Data privacy and sovereignty are critical; regulatory recommendations advise against unfettered cross-border data movement.
- The EU AI Act introduces a complex, tiered risk-based approach which can be costly and slow for innovation; India's flexible guidelines (seven principles) prioritize trust, safety, and innovation.
- France differentiates itself within EU by supporting innovation and facilitating large-scale AI infrastructure growth through national policies.
- India's IndiaAI initiative positions it as an agile, global AI hub, demonstrating that flexible regulation can coexist with ambitious national strategy.
- France and India, already aligned in joint statements on AI ethics and governance, are uniquely positioned to offer regulatory alignment without full uniformity.
- Key takeaways: regulation shapes capital and talent flows; no single regulatory model fits all; bilateral collaboration can combine complementary strengths for mutual benefit.
AI, Sovereignty, and Cooperation: Shaping the Future Across the Global South
The session at the India AI Impact Summit 2026 focused on the critical tension and synergy between AI sovereignty and international collaboration. Beginning with session logistics, the discussion quickly moved to the strategic imperatives facing nations regarding AI infrastructure, sovereignty, and collaborative frameworks. Panelists, featuring leaders from Nvidia, COP 30, CDPI, Miy, and Triille, debated the feasibility and necessity of full-stack AI sovereignty, especially for large nations like India and China, versus efficient domain-specific approaches for others. Nvidia’s perspective highlighted the value of leveraging global AI advancements while adapting them with local data and governance for nation-specific relevance. India's representative detailed a multi-layered national strategy, launched via the 2024 India AI Mission, emphasizing the creation of a sovereign AI stack spanning compute, chips, datasets, foundational models, and applications. Key elements included empaneled service providers offering over 38,000 GPUs as compute-as-a-service, strategic government initiatives supporting chip manufacturing (PLI/DLI), the development of 12 foundational models trained on Indian data, and interoperable approaches to democratize compute access with subsidies. The panel agreed that true sovereignty in AI must be underpinned by robust local innovation ecosystems and global cooperation to address planetary-scale challenges. India’s approach, centered on inclusivity and global good—the themes of ‘people, planet, and progress’—also positions the country as an emerging global compute and application hub, aiming to support both domestic and Global South AI development.
- All sessions at the Bharat Mandapam venue to close by 4:20 p.m., with strict exit by 4:30 p.m. due to security protocols; expo remains open until 8:00 p.m.
- Panel features key participants from Nvidia, COP 30, CDPI, Miy, and Triille, focusing on sovereignty versus collaboration in national AI frameworks.
- It is acknowledged that only the world's largest countries, like India and China, can potentially build full-stack AI infrastructure; other nations should focus on domain-specific, locally-relevant models leveraging global AI advancements.
- Nvidia emphasized that hardware (silicon) is only part of the AI stack—talent, datasets, governance, and collaborative research ecosystems are equally vital for sovereignty.
- India’s AI Mission, launched in March 2024, targets a sovereign AI stack: energy (addressed separately), a chip layer, a compute layer, datasets/data centers, foundational models, and applications.
- Key numbers: 14 empaneled compute service providers have committed 38,000 GPUs, with a further 20,000–25,000 GPUs to come online; compute is offered as a service through a portal with user registration and up to a 40% subsidy on GPU hours.
- India is developing 12 foundational AI models trained on Indian data to address language and cultural specificity, versus generalized global models.
- Government policies, such as PLI/DLI for electronics and chip manufacturing, are crucial parts of India's strategy; legislative support for data and energy layers is underway (e.g., the Shanti bill).
- India aims to democratize AI assets, centralize compute capabilities, and position itself as a 'data center capital' and AI hub, emphasizing support for the Global South.
- Panelists uniformly assert that sovereignty and collaboration in AI are complementary, not contradictory, and that global challenges require cooperative, resilient, and locally-relevant approaches.
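The up-to-40% GPU-hour subsidy on the compute portal is straightforward to work through. A minimal sketch of the arithmetic; the hourly rate below is a hypothetical placeholder, and only the 40% cap comes from the session:

```python
# Back-of-envelope for subsidized GPU hours under the India AI Mission compute
# portal. The base hourly rate is a made-up placeholder; only the
# "up to 40% subsidy" figure comes from the session.

def effective_cost(gpu_hours: float, hourly_rate: float, subsidy: float = 0.40) -> float:
    """Cost to the user after the subsidy is applied."""
    assert 0.0 <= subsidy <= 0.40, "session cited subsidies of up to 40%"
    return gpu_hours * hourly_rate * (1.0 - subsidy)

# e.g. 1,000 GPU-hours at a hypothetical rate of 150 per hour, full 40% subsidy:
print(effective_cost(1_000, 150.0))       # 90000.0 (vs 150000.0 unsubsidized)
```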
AI for Health: Driving Care Innovation for Billions
The session convened top global and national leaders shaping the digital health and AI ecosystem, focusing on India's unprecedented journey to empower 1.4 billion people through digital health infrastructure. Moderated by Dr. Rajendra Pratap Gupta, the panel featured Aneesh Chopra (former US CTO), Dr. Sunil Kumar Bernwal (CEO, National Health Authority), Dr. Anand Iyer (Chief AI Officer, Welldoc), and Manish (COO, Mayo Clinic Platform). The discussion charted India's global leadership in scaling digital public health infrastructures like ABDM (Ayushman Bharat Digital Mission), the challenges of widespread adoption, and transformative use-cases for AI in personalized care, chronic disease management, and patient empowerment. The panelists highlighted the importance of interoperable, open, and federated data systems, the need for incentives and awareness to drive adoption among providers and patients, and the ambition to translate this scalable Indian model to the global south and beyond. Examples included innovative clinical-AI tools that deeply personalize interventions—emphasizing that the 'blockbuster drug of this century is the engaged patient' supported by evidence-based AI agents. The session crystallized India's digital health model as a global reference point for population-scale, patient-centric, AI-driven healthcare.
- India's ABDM digital health infrastructure is operational, with 860 million ABHA IDs issued, but the linkage and use of digital health records must increase significantly (currently about 880 million records, roughly one per ID).
- The ABDM system is designed as an open, interoperable, and federated digital public infrastructure, capable of supporting AI-driven healthcare at unprecedented scale.
- Panelists stressed the importance of incentivizing hospitals and reducing friction for adoption, including 'digital health incentives' for facilities creating linked health records.
- India has created the foundational digital health rails; the next challenge is driving widespread adoption, both by professionals and citizens, through awareness and ease-of-use.
- Global leaders (Aneesh Chopra) see India's model as an opportunity to 'leapfrog' traditional models—advocating the export of India's open standards and information architecture to the global south and beyond.
- Clinical-AI innovation was showcased, especially in chronic disease management, exemplified by Welldoc's AI platform that delivers personalized, guideline-based 'turn-by-turn' interventions (described as 'Google Maps for patients'), resulting in significant improvements in metrics like HbA1c and blood pressure.
- The centrality of the 'engaged patient,' empowered by interoperable data, evidence-based AI, and actionable insights, was highlighted as the key to unlocking population-level health improvements.
- The session marked a rare convergence of experts and systems that have already impacted billions, underscoring India's unique position to set a global precedent for digital, AI-empowered, patient-centric healthcare.
The Future of Finance: From AI Adoption to Autonomous Banking
This session at the India AI Impact Summit 2026 provided an in-depth overview of the dramatic growth and transformative shifts in the data center and AI infrastructure ecosystem, both worldwide and with a focus on India. Supermicro executives highlighted the company's explosive revenue increase (from $4B to $22B in five years) and ambitious future guidance ($40B), as well as its global manufacturing footprint capable of $70-$100B in production. Key differentiators include fully in-house, end-to-end technology capabilities, allowing fast adaptation to rapid semiconductor innovation cycles and modular, cutting-edge data center solutions. The dialogue addressed surging power needs (with India’s data center capacity leaping from 1.4 GW to 7-8 GW by 2030), the necessity of renewable energy adoption, liquid cooling advancements for high-density compute demands, and the challenge of future-proofing facilities amid fast hardware innovation. The session also underscored India's strategic advantage as much of its data center infrastructure is being built new, allowing integration of the latest technologies. Finally, the discussion pivoted to ecosystem collaboration—the importance of developers, industry, and flexible infrastructure to maximize return on unprecedented investments in AI and data centers.
- Supermicro’s revenue grew from $4 billion five years ago to $22 billion recently, with a market guidance of reaching $40 billion.
- In-house end-to-end technology development and manufacturing allows for faster hardware innovation cycles and reduces dependency on partners.
- Supermicro can produce up to 6,000 racks per month, with 3,000 racks supporting advanced liquid cooling solutions; current factory power usage is 63 MW.
- Global manufacturing capacity spans the US, Netherlands, Taiwan, and Malaysia, collectively supporting $70–$100 billion in output.
- India’s data center power capacity is expected to increase from 1.4 GW today to 7–8 GW by 2030, driven by significant public and private investments.
- The Indian government is pushing for an increased share of renewable energy in new data centers (hydro, solar, wind) to limit fossil fuel dependence.
- AI model size is growing exponentially, from 1–1.5 trillion parameters to 6+ trillion today, with future models potentially exceeding 10 trillion.
- Data center racks are advancing from traditional 20–30 kW to 260 kW per rack, with projections of 1 MW per rack within 1–2 years.
- Shift to liquid cooling and DC bus bar power delivery is critical to accommodate new high-performance GPUs (>2,000 W each) and heavy racks (up to 7,000 lb).
- India has an infrastructure advantage as much of its data center build-out is greenfield, enabling faster adoption of modern, modular, and high-density solutions.
- Return on investment and operational efficiency remain top concerns amid trillion-dollar data center investments; developer and software ecosystem is increasingly critical.
CATCH Grant Awards 2026: Recognizing Innovation in AI Cancer Care
The session at the India AI Impact Summit 2026 highlighted India's growing cancer burden, with cases rising from 1.2 million to 1.6 million annually in just five years and mortality rates among the world's highest. In response, the Ministry of Health is rapidly expanding cancer care infrastructure from central institutions to district hospitals. The pivotal theme was the transformative potential of AI in improving cancer diagnostics, treatment, and data management, with government leaders emphasizing 'human in the loop' to enhance productivity rather than replace jobs. The summit showcased the success of large-scale AI innovation challenges—such as the CATCH challenge with 299 applicants, distilled into a compendium of screening, diagnostic, and treatment adherence solutions—and celebrated both national and international winners across health, governance, and education. Strategic partnerships were announced to establish national frameworks for the evaluation and certification of AI medical tools, marking a significant policy move to integrate validated AI technologies at scale into healthcare delivery.
- India's annual cancer cases increased from 1.2 million to over 1.6 million in five years; the country ranks third globally in cancer burden.
- Mortality rates are alarmingly high, with two-thirds of patients dying before receiving treatment (approximately 1 million deaths last year).
- The Ministry of Health has set up 39 state cancer institutes and is decentralizing cancer care by establishing district daycare centers for treatment.
- AI is being embraced for cancer diagnosis (addressing radiologist shortages and improving accuracy), radiotherapy planning (reducing hours of planning to minutes), pathology, and mammography.
- Recent government AI challenges attracted 15,000+ teams (including international entries) across multiple domains; the CATCH challenge had 299 applicants narrowed down to the top 10, who were recognized on stage.
- A compendium of validated AI solutions in cancer care (screening, diagnostics, treatment adherence, data curation) has been launched and made available.
- The Indian government reinforced a 'human in the loop' approach: AI will augment, not replace, healthcare professionals.
- Five innovation challenge winners were named for governance, climate/disaster management, health, learning disabilities, and agriculture out of 1,000+ submissions.
- Two teams won the AI-based authentication challenge, aiding integrity in public examination systems (notably deployed for UPSC exams).
- A major partnership was announced between India AI, the National Cancer Grid, IIT Bombay, Ashoka University, and ICMR to establish a national mechanism for AI health tool evaluation and certification before deployment.
Empowering Courts with AI: Tools, Insights & Impact
The session at the India AI Impact Summit 2026 focused on the integration of AI within India's judicial system, emphasizing the need for ethical safeguards, risk assessment, and robust data management to ensure fairness, transparency, and public trust. Panelists highlighted the unique challenges of India's multilingual and complex judicial environment, placing particular stress on the importance of balancing AI-driven efficiency with constitutional protections and due process. Capacity building for judges and legal professionals, updates to legal education curricula, and open, transparent technology procurement were consistent themes. Concrete examples such as the launch of Adalat AI’s multilingual WhatsApp chatbot and real-time court transcription tools showcased AI’s potential to improve access to justice, especially for marginalized and rural communities, while underlining the need for ongoing cross-sector collaboration to ensure these advances remain aligned with human rights and non-discrimination principles.
- The Assistant Director General opened the session; the launch of a key policy brief was rescheduled to the end of the session.
- Judiciary's adoption of AI must be carefully categorized by risk (e.g., administrative vs. substantive roles), with tailored audit/assessment protocols for bias, transparency, and explainability.
- Data protection and robust data management are critical, especially given potential exposure of sensitive judicial data in AI applications.
- AI technology in courts should be procured through transparent processes, with external audits to engender public trust.
- Judicial and legal capacity building is crucial; UNESCO and Indian institutions are advancing training programs (including MOOCs) for judges and legal professionals on AI and rule of law.
- Legal education in India is evolving to prepare the workforce for AI-driven justice, with recommendations for multi-lingual education, new curricula on technology and AI, and stronger community legal clinics.
- Panel cited National Law Universities’ shift towards technologically literate law professionals, proposing specialized law-tech honors from the second year onwards.
- Adalat AI is delivering real-world solutions: a multilingual WhatsApp chatbot for case updates and AI-powered real-time court transcription to address backlogs and bureaucratic bottlenecks, particularly benefiting marginalized communities.
- Cross-sector cooperation—among judiciary, academia, tech firms, and the public—is emphasized as essential for responsible AI adoption in justice.
- Overall, there is a call for cautious, inclusive AI adoption that upholds due process, constitutional rights, multilingual access, and reduces the justice gap without eroding public trust.
Predicting the Unpredictable: AI for Weather & Climate Resilience
The session, led by Dr. Karthik Kashinath of Nvidia's Earth 2 initiative, focused on leveraging AI and HPC (High Performance Computing) to address the daunting challenge of ultra high-resolution weather and climate modeling. Dr. Kashinath traced the evolution of the Earth 2 project, inspired by digital twin concepts such as the EU's Destination Earth, with the vision of enabling highly detailed, interactive climate simulations accessible to all. He outlined the technical grand challenges—including the need to emulate planetary systems across ten orders of magnitude in space and time and process exabytes of data—and the corresponding leap in computational requirements. Dr. Kashinath showcased Nvidia’s leading role in this space, such as winning the Gordon Bell prize for sub-kilometer Earth system simulations on Grace Hopper GPUs, and discussed the development and deployment of generative AI models ('Climate in a Bottle') trained on petabytes of high-fidelity simulation data from collaborations with major institutes. He highlighted Nvidia’s Omniverse Cloud as a platform for advanced visualization and scenario analysis, and emphasized the growing global shift from research to operational deployment of AI-driven weather and climate intelligence, citing active collaborations with government and private sector partners worldwide including the Indian Meteorological Department. The session underscored AI’s transition from experimental innovation to essential infrastructure for forecasting, policy, and societal resilience against climate impacts.
- Earth 2 initiative aims to build an interactive global-to-local weather and climate intelligence system, inspired by digital twin concepts.
- Nvidia has simulated Earth system models down to sub-kilometer scale using Grace Hopper GPUs, winning the Gordon Bell prize at SC25.
- Scaling from 25 km to 1 km resolution in climate models demands 30,000x more compute; to 100m resolution requires 500 million times more compute.
- AI foundation models for climate ('Climate in a Bottle') are trained on over 10 petabytes of high-resolution simulation data from partners like Max Planck Institute, The Weather Company, and MITER.
- Single climate simulations can generate exabytes of data, posing massive data management and analytics challenges.
- Nvidia’s Omniverse Cloud enables advanced digital twin visualization from global to city block scales, used for both scientific and practical applications (e.g., city planning).
- Partnerships extend to global and Indian institutions, including the Ministry of Earth Sciences and IMD.
- AI-powered weather forecasting has rapidly transitioned from experimental research (since 2020) to real-time operational deployment by leading agencies like ECMWF.
- Twelve generative AI models for weather and climate have been released, covering a range of spatial and temporal scales.
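The compute multipliers quoted above are consistent with a standard back-of-envelope: refining grid spacing by a factor r multiplies cost by roughly r³ to r⁴ (two horizontal dimensions plus a CFL-shortened timestep, and up to another factor for vertical refinement). A quick sanity check, assuming only that rule-of-thumb power law (the exponents are a textbook heuristic, not figures from the talk):

```python
# Sanity-check the quoted compute multipliers against an r^3..r^4 scaling law.
# r = ratio of old to new grid spacing. The r^3 lower bound counts two
# horizontal dimensions times the CFL-limited timestep; r^4 adds vertical
# refinement. Exponents are a standard heuristic, not from the session.

def scaling_bracket(old_km: float, new_km: float) -> tuple[float, float]:
    r = old_km / new_km
    return (r ** 3, r ** 4)

lo, hi = scaling_bracket(25.0, 1.0)    # 25 km -> 1 km
print(lo, hi)                          # 15625.0 390625.0; the quoted 30,000x fits
lo2, hi2 = scaling_bracket(25.0, 0.1)  # 25 km -> 100 m
print(lo2, hi2)                        # ~1.6e7 .. ~3.9e9; the quoted 500-million-x fits
```

Both figures cited in the session (30,000x for 1 km, 500 million times for 100 m) fall inside their respective r³..r⁴ brackets, which is what makes the numbers plausible at a glance.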
Decentralizing Power: Building Regional AI Compute Infrastructure
This expert panel at the India AI Impact Summit 2026 focused on the urgent need to expand India's AI compute infrastructure in a distributed, inclusive, and energy-efficient manner. Panelists highlighted that AI-driven workloads consume vastly more power compared to traditional web search, underscoring the dual challenge of providing sufficient compute capacity while balancing energy and environmental considerations. Dr. Nirj Kumar emphasized lessons from the US Department of Energy’s national laboratory model, advocating for public-private co-investment, regional distribution of data centers, and priority on workforce development. IBM's Anne Robinson outlined the importance of 'sovereign by design' cloud approaches to enable nations like India to retain autonomy over data while ensuring interoperability and preventing fragmentation across ecosystems. Durga Malladi of Qualcomm detailed the necessity of hybrid AI models that distribute AI workloads smartly across devices, edge infrastructure, and data centers, leveraging regional and sovereign models to support India’s diverse linguistic and regulatory requirements. The discussion grounded India's AI infrastructure ambitions in global best practices, technology neutrality, regional innovation, and balancing sovereignty with seamless innovation.
- AI search queries use 25 to 30 times more power than traditional web searches, creating significant energy and infrastructure demands.
- India’s current data center capacity is heavily concentrated in major metropolitan areas; there is a push to distribute compute infrastructure regionally to empower broader AI adoption.
- US Department of Energy’s experience shows that public-private partnerships and distributed national infrastructure are critical for scientific advancement and energy efficiency.
- Investments should equally focus on workforce development, not just technology, to build sustainable AI infrastructure.
- Sovereign cloud models ensure that a country’s sensitive data is subject to local laws and oversight, supporting autonomy and security.
- Hybrid- and interoperable-cloud approaches are central to balancing sovereignty with innovation and preventing ecosystem fragmentation.
- AI compute needs to be intelligently divided between cloud and edge to address latency, privacy, and connectivity challenges, especially in diverse geographies.
- Modern AI models are becoming smaller and more efficient, allowing meaningful inference workloads to be run directly on edge devices, supporting low-latency and localized use cases.
- Language and regional diversity in India drive the need for 'sovereign' and local AI models, ensuring inclusivity and broad impact.
- All panelists stressed that future infrastructure must be inclusive, resilient, and aligned to national priorities through broad stakeholder collaboration.
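The 25-30× power multiplier cited above can be made concrete with a back-of-the-envelope estimate. The 0.3 Wh baseline for a single traditional web search is an assumed illustrative figure (not from the session); only the multiplier range comes from the panel.

```python
# Rough energy comparison between a traditional web search and an AI query,
# using the 25-30x multiplier cited by the panel. The 0.3 Wh baseline is an
# assumed illustrative figure, not a number from the session.

WEB_SEARCH_WH = 0.3             # assumed energy per traditional search, watt-hours
AI_MULTIPLIER_RANGE = (25, 30)  # panel's cited range for AI queries

def ai_query_energy_wh(baseline_wh: float = WEB_SEARCH_WH) -> tuple:
    """Return the (low, high) estimated watt-hours for one AI query."""
    low, high = AI_MULTIPLIER_RANGE
    return (baseline_wh * low, baseline_wh * high)

def daily_fleet_energy_kwh(queries_per_day: int, per_query_wh: float) -> float:
    """Scale a per-query estimate to a daily workload, in kilowatt-hours."""
    return queries_per_day * per_query_wh / 1000

low, high = ai_query_energy_wh()
print(f"One AI query: {low:.1f}-{high:.1f} Wh")  # 7.5-9.0 Wh under the assumed baseline
print(f"1M queries/day: {daily_fleet_energy_kwh(1_000_000, high):,.0f} kWh")  # 9,000 kWh
```

Even under this modest baseline, a million AI queries a day lands in the megawatt-hour range, which is the infrastructure-planning pressure the panel was describing.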
AI for All: Driving Impact Across Society and Business
The India AI Impact Summit 2026 panel featured leading voices from government, industry, law enforcement, and education discussing India's rapid advancement in AI. Entrepreneurs and policymakers highlighted an unprecedented government-backed investment of ₹10,000 crore (~$1.2 billion) into the national AI mission, including the establishment of significant GPU infrastructure and data centers. The focus is on leveraging India's massive talent pool—5 million developers and 1.5 million engineers annually—to build not just service-based, but product-centric, globally competitive AI enterprises. Policymakers stressed embedding AI into India's digital public infrastructure to drive welfare for all, emphasizing the need to move beyond pilot programs to full-scale, inclusive deployments especially for rural and underserved communities. In law enforcement, AI was described as a transformative 'Iron Man suit' to empower police with data analysis and workflow automation, exemplified by pioneering projects that have dramatically improved conviction rates and operational transparency. The education sector identified AI's potential to offer deeply personalized guidance at scale, moving beyond information dissemination toward actionable decision-making support for every student. Collectively, panelists asserted that India's 'once-in-a-generation' AI opportunities will be realized only if society transitions from short-term transactional mindsets to long-term, ownership-driven innovation both for India and by Indians.
- Government commitment of ₹10,000 crore (~$1.2 billion) for India's AI Mission.
- Procurement of 38,000 GPUs to develop subsidized GPU infrastructure for startups.
- India is producing 5 million software developers and 1.5 million engineers annually.
- 75% national mobile penetration, with 4G/5G connectivity enabling broad AI adoption.
- Predicted 25-30% CAGR for the AI sector, with an expected $30B market value in the next 5-6 years.
- Emphasis on shifting from service-provider to product-builder/owner mentality among Indian entrepreneurs.
- Call to develop India-trained Large Language Models (LLMs) and embed them in digital public infrastructure.
- Law enforcement using AI-powered, locally coded solutions for case management—resulting in a 156% increase in life convictions in one district.
- First-in-the-world automated policing co-pilot system enabling voice-bot lodging of complaints in local languages.
- Policymakers urged scaling and institutionalizing successful AI pilots beyond city and sectoral boundaries.
- AI in education seen as key to delivering personalized, actionable guidance to millions of students.
- Consensus that AI's impact will depend on inclusive, bottom-up participation from talent, industry, and government.
The Farming Revolution: Andhra Pradesh’s AI-Powered Agri-Transformation
The keynote address by Shri Bhuti Raj Khari, Special Chief Secretary (Agriculture, Animal Husbandry, Dairy, and Fisheries) from Andhra Pradesh at the India AI Impact Summit 2026, highlighted Andhra Pradesh’s pioneering role in AI-driven agricultural transformation. He underscored the state’s rapid shift from skepticism to leadership in agri-tech adoption, spotlighting the recent demo of the 'Bharat Vistar' AI platform to Bill Gates at a local banana farm. Andhra Pradesh, with over 60% of its population reliant on agriculture (contributing 33% to its GSDP), has ambitious economic growth targets aligned with the national vision of Viksit Bharat 2047, including a $2.4 trillion economy and a per capita income target of ₹55 lakh by 2047. The state is deploying AI across the primary sector: from pest management, demand-driven crop planning, and AI-enabled drone services, to IoT-powered aquaculture and blockchain traceability systems. With 1.8 million farmers practicing natural farming across 2 million acres, Andhra Pradesh leads globally in climate-resilient agriculture, integrating tech solutions for improved productivity, premium market access, and sustainability. Khari emphasized five guiding policy principles (water security, demand-driven production, agri-tech adoption, food processing, and market interventions), while also addressing critical challenges: technological overreach, data integrity, the digital divide, and the need for practitioners to prioritize actual ground realities over ready-made solutions. The session aimed not just to showcase best practices but also to share lessons and avoid pitfalls for others seeking to replicate Andhra Pradesh’s advances.
- Andhra Pradesh demonstrated the 'Bharat Vistar' multilingual AI agri-advisory tool to Bill Gates on a local banana farm on June 16, 2026.
- 'Bharat Vistar' integrates agri-stack portals and IC packages, providing image-based pest detection, forewarning, drone booking, and customized advice.
- Over 1.8 million farmers (across 60% of villages) are practicing natural farming on 2 million acres, making Andhra Pradesh the world's largest natural farming state.
- Primary sector contributes 33% to state GSDP (vs. 18% national average), with a target to help achieve a $2.4 trillion economy and a per capita income of ₹55 lakh by 2047.
- State government is pursuing five guiding principles: water security (micro-irrigation, moisture management, real-time water resource tracking via Wasar technologies), demand-driven crop planning using AI, agri-tech adoption (including drones and remote sensing), food processing, and robust market interventions.
- Launched advanced platforms like the AP Farmer App featuring AI voice assistants for multilingual, hyperlocal advice and integration with Bharat Vistar goals.
- Utilizing drone-based services across 60,000+ hectares annually via 9,700+ providers, reducing input use by up to 75%.
- Implemented AI-based traceability for crops (e.g., blockchain for global certification) and animal identification (e.g., muzzle-ID 'Go Aadhaar') for supply chain transparency.
- Aquaculture modernization using IoT has yielded 30% power consumption savings; platforms like Aqua Exchange lead this innovation.
- Field parcel monitoring and tree counting via remote sensing enhance precision agriculture (1.44 crore parcels, >2.4 crore coconut trees, >1.4 crore palm trees counted).
- Key implementation challenges include ensuring data integrity, technology designed for actual farmer needs, digital literacy/divide, and changing both farmers' and officials' mindsets from tradition to evidence-driven decision-making.
Powering the Public Good: Aligning Industry, Philanthropy, and Government from India to Africa
The opening session of the India AI Impact Summit 2026 featured distinguished leaders Nandan Nilekani, co-founder and chairman of Infosys and chairman of the EkStep Foundation, and Sangbu Kim, Vice President for Digital at the World Bank. The discussion centered on India's experience scaling digital public infrastructure (DPI) and the learnings that can inform equitable AI deployment globally, particularly in emerging markets. Key themes were the importance of inclusivity, replicable 'pathways' for AI-powered public services, the need for guardrails and trustworthy data, and the urgency of bridging the job creation gap—especially for youth in developing regions. Both leaders emphasized shifting from a supply-driven to a demand-driven approach in AI adoption, fostering government investment, cross-country learning, frugal and context-aware deployments, and structured reskilling initiatives to maximize the technology’s benefit while minimizing social and economic risk.
- India's experience with scalable digital public infrastructure (such as Aadhaar and UPI) offers lessons for AI deployment, emphasizing institution-building, policy alignment, and interoperable platforms.
- A new initiative—'100 Pathways to AI Deployment by 2030'—was announced, aiming for targeted, replicable AI use-cases across diverse sectors and countries.
- The World Bank highlights a looming job gap: Of 1.2 billion youths joining the workforce in South Asia and Africa within a decade, only 400 million jobs will be available, making AI-enabled job creation and reskilling critical.
- Generative AI is projected to complement and augment 15-17% of current jobs in developing countries, while only 5-7% are considered at risk of displacement.
- Responsible AI requires trusted, curated data sources, institutional oversight, and explicit guardrails to foster inclusion, especially for marginalized communities.
- Concrete, demand-driven use cases—for example, AI in agriculture for farmers in Maharashtra, India and its rapid replication in Ethiopia—demonstrate the value of adaptable playbooks and international cooperation.
- The speakers advocate for increased public-sector investment to stimulate demand for AI solutions, moving beyond infrastructural concerns towards impactful, problem-solving deployments.
- Cross-country knowledge sharing and mutual learning, facilitated by institutions like the World Bank, are designated as critical enablers of equitable AI impact.
FHIBE: Advancing Ethical AI Through Fair and Human-Centric Data
The session at the India AI Impact Summit 2026 focused on the critical theme of responsible AI, especially within the context of data. Sony Research introduced 'FHIBE' (pronounced 'Phoebe'), a groundbreaking, open-source, globally diverse, and ethically collected human-centric image dataset that serves as a benchmark for fairness evaluation in computer vision tasks. The dataset emphasizes consent, diversity, copyright protection, and fair compensation. Panelists from Mastercard and AZB & Partners elaborated on the implementation of responsible AI: Mastercard highlighted a 'security by design' approach, ensuring fairness, transparency, accountability, and compliance in AI systems, particularly in finance. Legal expert Aprajita addressed the growing need for legislative frameworks to ensure accountability, liability, and robust remedies in case of AI-related harms, noting gaps in Indian law and the urgent need for discourse on enforcement and standards across the AI deployment chain. The session underscored the importance of integrating responsibility at every stage of AI system development and deployment, advocating for clearer accountability, stronger legal frameworks, and practical tools for evaluating and building responsible AI systems.
- Sony Research launched 'FHIBE,' the first globally diverse, consensually collected, and fairly compensated open-source dataset for evaluating fairness in human-centric computer vision AI.
- The FHIBE dataset supports nine different vision tasks and contains over 40 richly annotated attributes per image, and is GDPR-compliant.
- Core responsible AI principles discussed were: consent and consent revocation, diverse representation, copyright protection, and fair compensation.
- Mastercard employs a 'security by design' framework for responsible AI, emphasizing fairness (no discrimination by race, gender, etc.), transparency (explainability/audit trails), accountability, security, and regulatory compliance (e.g., EU AI Act).
- Legal perspectives highlighted the lack of clear accountability, remedies, and enforceable standards in Indian AI law for harms resulting from AI deployment, especially with the rise of autonomy in critical infrastructure.
- There is a pressing need for legislative amendments and guidelines in India to address liability and enforcement in AI systems, particularly in scenarios involving multiple vendors and AI diffusion.
- Open calls were made for industry-wide adoption of ethical data collection blueprints and increased awareness around responsible AI.
Beyond Ethics: Operationalizing Responsible AI for Global Impact
The panel at the India AI Impact Summit 2026, focused on 'Responsible AI: From Principles to Proof of Concept', brought together experts from industry and academia to discuss the practical challenges and ongoing efforts to operationalize responsible AI in varied Indian contexts. Speakers highlighted actionable strategies like security and privacy by design, ethical and responsible AI development, extensive real-world testing, and the critical importance of transparency and explainability for end-users. Industry leaders from Mahindra Group and Fortanix shared concrete AI implementations in sectors such as automobiles, agriculture, and finance, emphasizing not just data security but also the need for contextual sensitivity and trust in AI models. Professor Ranjan stressed that beyond technical robustness, responsible AI must actively address systematic biases and incorporate mechanisms for user trust, assurance, and accountability, especially as AI systems begin to influence large-scale decisions. The shared message was that with rising AI adoption, Indian organizations must adopt rigorous operational processes, cross-sectoral learning, and technology-driven security, moving from theoretical principles to tangible, reliable deployments that are both safe and inclusive.
- Mahindra Group anchors all AI initiatives on ethics, responsibility, and security, incorporating 'security and privacy by design' across its diversified businesses.
- AI is leveraged for safety in automobiles (including electric vehicles), with a focus on assisting—rather than replacing—drivers, using AI on-edge for real-time safety intelligence through cameras, LiDAR, and radar.
- 100,000+ acres of sugarcane fields and advisory services in agriculture demonstrate AI’s impact via personalized, satellite-based recommendations to farmers.
- Mahindra Finance applies AI to advance financial inclusion, facilitating credit access with secure, responsible data use, especially for ‘new-to-credit’ populations.
- Implementation of India's Digital Personal Data Protection Act (DPDP) is spurring organizations toward higher data responsibility and robust AI model explainability and testing.
- Fortanix, a Silicon Valley-based cybersecurity firm, introduced 'confidential computing'—now adopted by Nvidia AI Factory—for secure AI deployment, enabling protection of both proprietary AI models and user data.
- Confidential computing is gaining rapid traction, supporting requirements for data sovereignty and privacy in cloud and on-premises environments, which is significant for India's expanding AI infrastructure.
- Professor Ranjan emphasized that operationalizing responsible AI must go beyond data/model security to include transparency, user trust, and systemic bias mitigation.
- Institutionalizing unchecked biases in AI models can unduly legitimize discrimination; AI must be held to higher standards of fairness and explainability than humans.
- Operationalizing responsible AI requires understanding sector-specific risks, error tolerance, and the potential real-world cost of AI mistakes, especially in high-stakes applications like autonomous vehicles and lending.
AI Impact Forum | Breaking the Monopoly on AI Resources
The session at the India AI Impact Summit 2026 convened leading voices from technology, academia, and industry to discuss the holistic development of India's AI ecosystem. Key themes included the responsible co-development of AI by governments and industry with trust and safety embedded from the outset, the importance of heterogeneous and cost-effective compute infrastructure, and the barriers to enterprise-scale AI adoption—chiefly around data accessibility and governance. The panel highlighted the need to democratize AI talent through both upskilling existing workforces and revamping educational curricula to emphasize multidisciplinary skills and problem solving, alongside strong ongoing industry-academia collaboration. Underpinning all progress is the adoption of robust frameworks for transparency, explainability, and regulation to build public confidence in AI systems. Collectively, these steps are integral to India's ambition to realize AI's transformative potential across sectors and geographies.
- Government and industry must embed trust, security, and robust governance into AI systems from the design phase, building on India's digital infrastructure like Aadhaar and Digital Public Infrastructure.
- Heterogeneous compute—combining CPU, GPU, and NPU architectures tailored to workload—delivers scalable, affordable AI access, essential for broad deployment.
- Enterprise AI adoption is impeded not by model capability but by inaccessible or poorly governed data; a shift to 'data-first' architectures and improved data stewardship is critical.
- Democratization of AI talent requires widespread upskilling, making AI tools accessible to all functions (not just specialists), and leveraging AI as a productivity 'exoskeleton' for every worker.
- Transparency, explainability, and ongoing observability—alongside adaptive, non-compliance-centric regulation—are vital for earning societal trust and ensuring safe, reliable AI applications.
- India's educational approach must pivot from rote coding to deep, multidisciplinary problem-solving and systems thinking, with curricula frequently refreshed in collaboration with industry.
- Upskilling and continuous learning are individual responsibilities, enabled by MOOCs and accessible digital content, addressing the perceived gap between available talent and practical skills.
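The heterogeneous-compute point above (matching CPU, GPU, and NPU architectures to the workload) can be sketched as a simple dispatcher. The workload categories, device names, and routing rules below are illustrative assumptions for the sake of the sketch, not details from the panel; real schedulers also weigh cost, latency, and utilization.

```python
# Illustrative dispatcher that routes AI workloads to the processor class best
# suited to them. Categories and policy are simplified assumptions.

from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    kind: str                  # e.g. "training", "batch_inference", "light_inference"
    latency_sensitive: bool = False

def route(workload: Workload) -> str:
    """Pick a processor class for a workload (hypothetical policy)."""
    if workload.kind == "training":
        return "GPU"           # large parallel matrix math
    if workload.kind == "light_inference" and workload.latency_sensitive:
        return "NPU"           # on-device, low-power inference
    if workload.kind == "batch_inference":
        return "GPU"           # throughput-oriented batched work
    return "CPU"               # control logic, preprocessing, everything else

jobs = [
    Workload("train-llm", "training"),
    Workload("voice-assistant", "light_inference", latency_sensitive=True),
    Workload("etl-cleanup", "preprocessing"),
]
for job in jobs:
    print(f"{job.name} -> {route(job)}")
```

The design point is that affordability comes from not defaulting everything to the most expensive accelerator: only work that genuinely needs GPU-class parallelism is routed there.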
Building Resilient Energy Systems with AI: Innovation Meets Efficiency
This session at the India AI Impact Summit 2026 showcased the rapidly evolving intersection of artificial intelligence with the Indian energy and power sectors. Key contributors discussed the criticality of energy as the backbone of AI, the journey of utilities from technology laggards to early adopters, and the publication of a comprehensive AI use-case handbook detailing 174 global applications—over 40 from India. Speakers addressed the transformative potential of AI in areas ranging from grid operations, planning, and asset management to regulatory simplification and capacity building. However, practical challenges were highlighted, including the lack of standardized use cases, procurement uncertainties, the skills gap between energy veterans and tech-savvy youth, and the need for a unified digitalization roadmap for DISCOMs (distribution companies). The session underscored a strong call for collaboration between generations and sectors, promotion of innovative approaches to procurement, and collective action towards a common digitalization vision, exemplified by initiatives such as new power sector-specific MBA programs.
- Release of a comprehensive AI/ML/robotics handbook in November 2025, documenting 174 global use cases from 35 countries, including 40+ from India, covering all segments of the power sector.
- Handbook incorporates policy frameworks and evolving standards, intended as a living document with plans for future expanded editions.
- Peer review and global industry feedback were integrated, with the handbook being referenced by other summits and recent publications.
- Call to leverage AI in power sector planning (resource adequacy, transmission, and distribution), operations (grid and microgrid management), asset management, resilience, and reliability, especially in rural supply.
- Recognition of regulatory challenges: overloaded regulators, litigation, and complex tariff determination processes—suggestions for transparent, automated AI platforms.
- Procurement for AI solutions is a major pain point; there’s a need for institutional frameworks, innovative contracting approaches, and transparent vendor evaluation for large and small projects alike.
- Emphasis on bridging generational and skill divides—proposed collaboration between retired sector veterans and young AI talent, possibly facilitated by industry organizations and startups.
- The formation of a DISCOM association to co-create a common digitalization roadmap with standardized checklists and guidelines for technical, commercial, and procurement processes.
- Launched an MBA course (in collaboration with IIM Lucknow) tailored for power sector professionals, focusing on digital skills and sectoral leadership.
AI for Good: From Evidence to Scaled Development Solutions
This opening session at the India AI Impact Summit 2026 assembled a distinguished panel featuring the Government of India's Chief Economic Adviser, senior innovation leaders from FAO and IFAD, and the World Food Programme's data experts. The panel, moderated by the UN World Food Programme's Global Technology Partnerships Manager, set the thematic focus on moving AI from evidence and pilot phases into impactful, scalable solutions for humanitarian and development actions. Key highlights included India's recent inclusion of a dedicated AI chapter in its national Economic Survey, a strong emphasis on risk-based, proportionate and phased AI policy and regulatory approaches suited to the Indian context, and multi-agency collaboration on data-driven, AI-powered agricultural and food security initiatives. The panelists underlined AI's transformative impact on the speed and precision of scientific research, the democratization of information, and the empowerment of both governments and individuals, with a clear message that effective AI depends on quality data, human expertise, inclusivity, and adaptive regulation. Audience engagement was encouraged, and the session promised practical discussions on operationalizing AI at scale for sustainable development.
- The Government of India's Economic Survey included its first-ever dedicated chapter on AI, signaling national policy prioritization.
- India advocates a 'third way' for AI regulation: neither market-driven (US) nor overly regulatory (EU), but a bottom-up, phased, risk-based, and proportionate approach sensitive to labor market concerns.
- Panel brought together leadership from the UN World Food Programme, Food and Agriculture Organization (FAO), and International Fund for Agricultural Development (IFAD): the 'Rome-based Agencies' collaborating on food security tech (SDG2: Zero Hunger).
- Showcased ground-level AI implementations such as 'Farmer Chat' for weather-specific agricultural advice (Life ND Project).
- WFP highlighted its first-ever global data strategy and AI strategy, launched in 2023.
- AI is credited with enabling faster, more precise, and more inclusive research and interventions in agriculture and humanitarian aid.
- Panelists noted AI's potential to democratize information, bridge the evidence-to-implementation gap, and improve evaluation methods, though these gains require sound governance and good-quality data.
- Human expertise remains central: AI augments, rather than replaces, human judgment and knowledge.
- Session committed to audience participation, practical dialogue, and showcasing on-ground impact rather than abstract discussion.
How Bharat Is Democratising Artificial Intelligence
The session at the India AI Impact Summit 2026 brought together key government leaders from states including Madhya Pradesh, Karnataka, Telangana, and Odisha to discuss the democratization of AI in Indian governance and its citizen-centric applications. The panel highlighted tangible advancements, such as Madhya Pradesh's use of AI and satellite imagery in agriculture, resulting in increased crop yields and streamlined disaster compensation mechanisms. Odisha emphasized efforts to include underrepresented languages in AI datasets, robust skilling initiatives across all levels of society, and comprehensive outreach via accessible platforms. Both states showcased a commitment to making AI inclusive and accessible, leveraging localization, trust-building through transparent implementations, and public-private-academic collaborations for social impact.
- Madhya Pradesh deploys AI and satellite imagery for crop monitoring and disaster compensation, achieving 95% accuracy in crop identification and a 25% increase in yields over three years.
- Use of AI in agriculture reduces both omission and commission errors, curtailing corruption and improving trust among farmers, evidenced by a significant reduction in compensation fraud and grievances.
- Odisha is building a large Odia language dataset, converting 1,600 texts so far, and will soon launch 'Odia Bhashadan' to crowdsource data from citizens across 13 sectors.
- Voice-based datasets are being created to capture dialectal variation within Odia, enhancing AI systems’ linguistic inclusivity.
- Massive skilling initiatives in Odisha: partnerships with the Wadhwani Foundation for upskilling government staff, the 'AI for All' public program, and recent AI education rollouts in 12,000 schools for high school students.
- Collaborative projects with universities, notably the Odisha University of Agriculture & Technology, are driving domain-specific AI data creation—especially in agriculture, health, and disaster management.
- Emphasis on democratization means ensuring access to AI across region, socio-economic background, affordability, and language, creating truly citizen-centric digital platforms.
AI Horizons: Building Safe and Trusted Intelligence Systems
This panel session at the India AI Impact Summit 2026 focused on the equitable distribution and democratization of emerging technologies, particularly artificial intelligence (AI) and data accessibility, across India’s diverse regions. Key speakers included leaders from NIELIT, IIM Calcutta, and IIT Kanpur, who highlighted initiatives like nationwide AI labs, the creation of India-centric AI educational platforms, and efforts to empower underserved communities. Illustrative projects were shared, such as tools for Indian Sign Language conversion and regional audio translation, underscoring how context-specific AI can address unique societal challenges. The session flagged concerns about over-reliance on foreign large language models (LLMs), the importance of data sovereignty, and the risks of being relegated to mere consumers in the global AI ecosystem. It also emphasized how AI can transform traditional challenges in education, like large-scale assessment, driving efficiency and transparency while also maintaining space for creativity. The panel advocated for India’s transition from being just a consumer to also becoming a creator in the AI domain, pushing for the development of indigenous models suited for local needs and the promotion of research based on India’s own diverse data.
- NIELIT has established 56 centers nationwide, reaching even remote regions, and provides skills training to nearly 1 million individuals annually.
- In the past four months, 28 India data labs have been set up across the country to democratize data access.
- Showcased AI student projects: a tool converting video/audio into Indian Sign Language, regional audio-to-audio language translators (e.g., English/Hindi to Manipuri), and an artisan design tool that drastically reduces time required for product design.
- Launched the NIELIT Digital Industry Platform, India's first digital industry platform for education and a highly AI-intensive one, which has attracted 50,000 registered students in just four months (currently registering one student every minute).
- All platform data is open for researchers to develop localized AI models (including LLMs), supporting collaboration and innovation.
- Strong warning against dependence on foreign LLMs and the risk of India serving only as a data provider/consumer; urged development of homegrown AI solutions for India’s unique contexts.
- Highlighted AI-based digitized assessments at IIM Calcutta, using both manual and AI evaluation for large-scale examinations—potentially evaluating 3 million answer sheets per semester.
- Innovative acceptance mechanisms included letting students choose between AI/Manual marks and using the mean/highest of multiple AI tools to ensure fairness.
- Leadership called for a shift in India’s AI engagement: from consumption to creation, suggesting indigenous foundational models and policies for safeguarding data sovereignty and economic retention.
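The acceptance mechanisms described above (students choosing between AI and manual marks, and aggregating multiple AI tools by mean or highest score) can be sketched as a small grading policy. The function names and the exact policy shape are illustrative assumptions, not the institution's actual implementation.

```python
# Sketch of the mark-acceptance mechanisms described in the session: a
# student's final mark comes either from the manual evaluation or from an
# aggregate (mean or highest) of several AI graders. Names and policy details
# are hypothetical illustrations.

from statistics import mean

def aggregate_ai_marks(ai_marks: list, policy: str = "mean") -> float:
    """Combine marks from multiple AI evaluators using the chosen policy."""
    if policy == "mean":
        return mean(ai_marks)
    if policy == "highest":
        return max(ai_marks)
    raise ValueError(f"unknown policy: {policy}")

def final_mark(manual: float, ai_marks: list,
               student_prefers_ai: bool, policy: str = "mean") -> float:
    """Let the student choose between the manual mark and the AI aggregate."""
    ai_mark = aggregate_ai_marks(ai_marks, policy)
    return ai_mark if student_prefers_ai else manual

print(final_mark(62.0, [68.0, 70.0, 66.0], student_prefers_ai=True))   # 68.0
print(final_mark(62.0, [68.0, 70.0, 66.0], student_prefers_ai=False))  # 62.0
```

Giving students the choice (or taking the more favorable aggregate) biases the system toward the student in disputed cases, which is one way to build trust while AI evaluation is still being validated at scale.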
AI for 1.4 Billion | Scaling Healthcare Solutions at Population Level
The session at the India AI Impact Summit 2026 explored how artificial intelligence can be effectively leveraged to enhance disease surveillance, outbreak prediction, and healthcare delivery in India. Multiple speakers—including AI leaders, healthcare strategists, and academic pioneers—discussed deployable AI models that can enable real-time analytics at the point of care, even in resource-limited rural and remote settings. However, the session also addressed the persistent challenges in scaling AI within national health systems, such as fragmented data systems, insufficient governance frameworks, limited compute infrastructure, lack of skilled personnel, and the necessity for sustainable financing. Innovative AI-driven diagnostic and screening technologies were showcased, highlighting collaborations between the public and private sectors geared toward embedding AI responsibly and equitably, especially for underserved communities. Institutions like Nvidia are partnering with the Indian government under the India AI Mission to develop sovereign, privacy-preserving foundation models that ensure data remains under national control while addressing multilingual and population-specific health challenges. Startups like Neurode AI are creating frontline diagnostic tools for conditions like epilepsy, with the aim to triage patients at the primary care level and efficiently connect them with specialists. The importance of equitable access, continuous screening, and robust referral pathways was emphasized, with a vision to transform India's vast healthcare landscape through responsible, population-scale AI integration.
- Deployment of AI models and edge computing infrastructure enables real-time, locally contextualized analytics for healthcare at the point of care.
- Scaling AI in national health systems faces challenges: fragmented data, governance/policy gaps, limited computational resources, skills shortages, and insufficient financing.
- Nvidia, as a key partner in the India AI Mission, is supporting the creation of sovereign foundation models, ensuring data privacy and compliance with the Digital Personal Data Protection (DPDP) Act.
- Nvidia's platforms (NeMo, NIM) allow building scalable, multilingual, and privacy-respecting AI diagnostic/analytics tools suitable even for rural and low-resource settings.
- Concrete example: automating doctor-patient conversation transcription for electronic medical records, thereby improving doctor productivity.
- Neurode AI is developing low-cost biosignal-based diagnostic and screening tools for conditions like epilepsy, aiming to have them deployed at the primary healthcare center level for early detection and triage.
- Collaboration between AI tool creators, healthcare practitioners, and program administrators is vital for integrating these innovations within existing government healthcare infrastructures.
- Focus on embedding equity, safety, and continuous screening into AI deployment, with special emphasis on reaching vulnerable, marginalized, and rural populations.
- Existing government digital health platforms (e.g., eSanjeevani, PM-JAY) can act as bridges for connecting AI screening/diagnostic tools with referral and treatment pathways.
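One concrete example above is automating doctor-patient conversation transcription into electronic medical records. As a purely illustrative sketch (the field names, symptom vocabulary, and `Rx:` convention are invented here, not taken from any speaker's system), a rule-based stand-in for that pipeline might look like:

```python
import re

# Hypothetical symptom vocabulary for illustration only.
SYMPTOM_TERMS = {"fever", "cough", "headache", "fatigue", "seizure"}

def transcript_to_emr(transcript: str) -> dict:
    """Extract a minimal structured EMR record from a consultation transcript."""
    words = {w.strip(".,").lower() for w in transcript.split()}
    symptoms = sorted(SYMPTOM_TERMS & words)
    # Capture a prescription line such as "Rx: paracetamol 500mg"
    rx = re.search(r"Rx:\s*(.+)", transcript)
    return {
        "symptoms": symptoms,
        "prescription": rx.group(1).strip() if rx else None,
    }

record = transcript_to_emr(
    "Patient reports fever and persistent cough. Rx: paracetamol 500mg"
)
print(record)
```

In a real deployment the keyword matching would be replaced by a speech-to-text model plus a clinical NLP stage; the sketch only shows the transcript-in, structured-record-out shape of the task.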
The New Digital Commons: Building India’s Open AI Public Goods
This panel session at the India AI Impact Summit 2026 delved into the critical importance of open AI platforms for enabling scalable, inclusive, and impactful digital service delivery across India and the Global South. Featuring representatives from Nvidia, People+AI (EkStep Foundation), Digital Green, Abdar AI, and Mozilla Foundation, the discussion revolved around the foundational role open AI systems play in creating public infrastructure akin to electricity or roads. Panelists highlighted how openness—not just at the model level, but across data, tools, benchmarks, and collaborative processes—drives innovation, ensures repeatability, enables trust and safety, and empowers local communities, including millions of farmers. Concrete examples, such as Nvidia’s open-source Nemotron models, People+AI’s MahaVistaar initiative in agriculture, and Digital Green’s Farmer Chat app, showcased real-world deployments leveraging open collaboration to build multilingual, domain-specific, and trustworthy AI solutions. The consensus was that only by embracing open, transparent, and repeatable AI ecosystems can India and similar countries address linguistic diversity, foster innovation, and set scalable precedents for the developing world.
- Digital public services are leveraging open digital platforms to improve accessibility and impact, especially for India's multilingual population.
- Open AI platforms are seen as foundational digital infrastructure, akin to utilities like electricity or roads.
- Nvidia launched open-source Nemotron models, sharing not just the models but also the data, recipes, and tools to fully empower community-driven innovation.
- People+AI (EkStep Foundation) mandates open source across all its digital public infrastructure (DPI) initiatives, with impactful examples like MahaVistaar—an agriculture project involving 20+ organizations and governmental collaboration.
- Focus on 'diffusion capacity'—moving beyond model development to promote large-scale, shared, and trusted adoption—ensures community benefit and institutional trust.
- Building for repeatability and openness allows stakeholders—builders, students, policymakers—to avoid reinventing solutions and instead focus resources on expanding impact.
- Digital Green’s Farmer Chat reached one million downloads and handled 10 million multilingual farmer queries, highlighting the demand for locally relevant, open, and fine-tuned AI systems.
- Panelists emphasized the importance of open data sets, shared evaluation frameworks, and models fine-tuned for regional languages and domain-specific knowledge to reduce errors and increase reliability.
- Collaborative ecosystems—across civil society, tech companies, governments, and global organizations—are indispensable to ensuring responsible AI deployment.
- Indian solutions and innovations are rapidly scaling and setting benchmarks for the global south, offering templates for other developing nations.
Empowering People: Creating a Purpose-Driven AI and Data Workforce
The session at the India AI Impact Summit 2026 underscored the transformative potential of investing in not just AI technology, but also in people, communities, and ecosystems, with data.org as a central catalyst. The organization has played a pivotal role in convening a network of global stakeholders—including governments, civil society, academia, and industry—to accelerate data and AI solutions addressing major societal challenges across five global hubs. Anchored by both localism and global partnership, data.org’s Capacity Accelerator Network (CAN) has already trained over 150,000 individuals, engaged 9,000+ organizations, and secured 10 government partnerships, with a bold target of reaching 1 million AI practitioners by 2032. Dr. Oie Stewart of Mastercard Center for Inclusive Growth highlighted the importance of strategic philanthropic investment not only in people but also in platforms, organizational readiness, and responsible data governance. Real-world impact was illustrated through stories such as leveraging machine learning for emergency relief in Togo and improving agricultural outcomes in India. The session emphasized that lasting impact relies on closing the gap between the supply of trained talent and the organizational ecosystems ready to deploy them, arguing for an inclusive, collaborative, and purpose-driven approach to AI. Academic leaders, such as those from IIIT Delhi, have responded by embedding interdisciplinary, experiential, and socially relevant curricula into technical training, building partnerships beyond the classroom to ensure that data and AI education is rooted in practical, societal outcomes. The overall takeaway is a call for bold, intentional, and collaborative investment to shape an inclusive AI workforce equipped to tackle the world's most urgent problems.
- Data.org’s Capacity Accelerator Network (CAN) has trained over 150,000 individuals and worked with 9,000+ organizations globally.
- The initiative brings together over 100 partners, including 10 government partnerships, to scale data and AI skilling for social impact.
- Goal: Train 1 million purpose-driven data and AI practitioners by 2032, with a focus on diversity and local context.
- CAN's reach spans five global hubs: United States, Africa, Latin America, India, and Asia-Pacific.
- Projects highlighted include machine learning-enabled emergency relief, reaching 137,000 people in Togo, and predictive analytics for Indian farmers to reduce food waste.
- The growing social sector demand: 3.5 million additional data scientists needed in the next decade.
- Strategic philanthropy is urged to invest not just in training but in enabling infrastructures such as organizational data maturity, governance, and leadership readiness.
- IIIT Delhi's program design emphasizes interdisciplinary teaching, real-world case studies, and partnerships with healthcare and public sectors to ensure AI training translates to social impact.
- The session calls for collaboration across funders, educators, innovators, and public/private leaders to create an empowered, impact-driven AI workforce.
Smart Grids & Green Power: The Future of AI in Global Energy Systems
This session at the India AI Impact Summit 2026 highlighted the convergence of AI and energy infrastructure, focusing on NVIDIA’s cutting-edge work in developing highly efficient, scalable AI factories and the application of AI-driven digital twins and simulation to both AI and energy sectors. The speaker traced the evolution from generative to agentic and now physical AI, emphasizing the growing computational and energy demands of next-generation AI systems. NVIDIA’s annual cadence of hardware innovation, hardware-software co-engineering, and energy-focused metrics (like tokens per watt per dollar) are driving down operational costs while meeting increasing demand for AI capabilities. A major announcement was the use of NVIDIA Omniverse for creating detailed digital twins to optimize the design, building, and operation of AI Gigafactories, enabling rapid iterations and massive reductions in execution errors and time-to-market. The approach is also being extended to modernize the century-old electric grid, using AI for predictive maintenance, real-time grid optimization, cybersecurity, and renewable integration. A key initiative is the Open Power AI Consortium, founded by NVIDIA and EPRI, which fosters domain-specific AI models for the grid, opens data and compute access to developers, and accelerates modernization through a collaborative ecosystem involving utilities, academia, and technology partners. The session underscored the importance of ‘simulate before you build’ across industries and championed a systems-level view, from chip design to grid management, to unlock unprecedented efficiency and innovation in both AI and energy infrastructures.
- AI advancement outlined from perception and generative AI to agentic and physical AI, leading to surging compute and energy requirements.
- NVIDIA’s annual hardware cadence and deep co-engineering reduce the energy needed to generate AI tokens by orders of magnitude.
- The 'tokens per watt per dollar' metric is used to evaluate and maximize AI compute efficiency and cost-effectiveness.
- AI factories (AI Gigafactories) leverage energy (electricity) and data as inputs to produce intelligence, with new architectures including 800V DC racks for higher density and power.
- NVIDIA Omniverse creates digital twins for AI factories, simulating entire build and operation chains to optimize power usage, cost, and reliability before construction.
- Omniverse enables collaborative engineering and parallel design, reducing errors and accelerating time-to-market for AI infrastructures.
- India and global partners are expanding Omniverse-driven simulation from AI factories into the energy sector, modernizing the grid for real-time optimization, renewables integration, predictive maintenance, and cybersecurity.
- Introduction of the Open Power AI Consortium (NVIDIA, EPRI, utilities, academia, major tech firms) to accelerate domain-specific foundation models for the power sector.
- Consortium pillars: open data/model development, providing compute sandboxes, and shared implementation best practices, unlocking ecosystem innovation.
- Emergence of 'physical AI' at the grid edge—autonomous, real-time inference and edge compute for intelligent grid management.
- AI agents are being developed to accelerate traditionally slow processes such as interconnection studies (reducing multi-year delays) and rate case analysis.
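The 'tokens per watt per dollar' metric described above can be made concrete with a small calculation. The throughput, power, and price figures below are invented placeholders for illustration, not NVIDIA numbers:

```python
def tokens_per_watt_per_dollar(tokens_per_sec: float, watts: float, cost_usd: float) -> float:
    """Normalize AI throughput by both power draw and hardware cost."""
    return tokens_per_sec / (watts * cost_usd)

# Comparing two hypothetical accelerator generations:
gen_a = tokens_per_watt_per_dollar(tokens_per_sec=10_000, watts=700, cost_usd=30_000)
gen_b = tokens_per_watt_per_dollar(tokens_per_sec=40_000, watts=1_000, cost_usd=35_000)
print(f"gen_b is {gen_b / gen_a:.1f}x more efficient than gen_a")
```

The design intent of the metric is visible in the denominator: a faster chip that draws proportionally more power and costs proportionally more gains nothing, so improvements must come from genuine hardware-software co-engineering.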
Heavy Industry 4.0: Transforming the Global Steel Sector with AI
The opening session of the India AI Impact Summit 2026, hosted by the Ministry of Steel, Government of India, focused on catalyzing collaboration between the country's booming steel sector and the rapidly advancing AI ecosystem. Senior officials—including Secretary Sandeep Poundrik and Joint Secretary Shrian Kumantraati—underscored India's unique position as the world’s fastest-growing steel market, with consumption rising from 95 million to 152 million tons in five years and substantial capacity expansions underway. The speakers emphasized the transformative potential of AI in addressing sector-specific challenges such as productivity, quality, safety, logistics, and ESG compliance. The newly established Steel Research and Technology Mission of India (SRTMI) was positioned as a strategic single-point interface for startups and innovators to partner with steel companies, thus enabling the integration of cutting-edge AI solutions throughout the value chain. The session set the tone for industry–startup partnerships, pilot project launches, and a commitment to building a digitally empowered, globally competitive steel industry through practical solutions and sustained dialogue.
- India’s steel consumption grew by over 50% in five years (from 95 million to 152 million tons), making it the fastest-growing market globally.
- Steel production capacity is rising rapidly: India is adding about 20 million tons per year, and current capacity of roughly 200 million tons is expected to double to 400 million tons over the next decade.
- Investment of approximately $200 billion anticipated in the steel sector over the next 10 years.
- Nearly 47% of India’s steel is produced by over 2,200 small and medium enterprises, expanding collaborative opportunities beyond large integrated producers.
- AI identified as critical for smart plant operations, mining, logistics, market intelligence, safety, and ESG compliance, moving beyond superficial adoption to tangible economic and societal benefits.
- Steel Research and Technology Mission of India (SRTMI) formally launched as a single-point contact for AI startups and solution providers to interface with steel companies, catalyzing structured partnerships and pilot projects.
- Invitation for participation at the upcoming Bharat Steel International Conference, further integrating automation and AI discussions.
- Digital transformation initiatives already underway, such as Steel Authority of India’s (SAIL) partnership with McKinsey for comprehensive operational modernization.
The Power of Open Data: Unlocking Global Insights with Data Commons
The session discussed India's fragmented data landscape, where vast volumes of public data remain siloed and inaccessible to non-technical users due to disparate formats and complex interfaces. To address this, the speakers introduced Data Commons: an open-source, schema-based platform equipped with open APIs and natural language AI tools that unify public data in a searchable, provenance-rich knowledge graph. By integrating large language models (LLMs) for front-end access and supporting both public and private data overlays, this initiative aims to democratize data access for government officials, MSMEs, and citizens. The pilot implementation, 'data boarding pass,' showcases how policymakers can self-serve insights using everyday language, thereby reducing reliance on scarce data analysts. The panel acknowledged the challenges of data overload, lack of metadata, linguistic diversity, and last-mile adoption, highlighting ongoing efforts like the Bhashini project to incorporate local languages and make AI tools truly inclusive. The overall vision is to propel India past global AI divides—just as it leapfrogged in banking and telecommunications—by building robust digital public intelligence.
- Announcement of 'Data Commons,' an open-source, schema-based platform with open APIs and a common knowledge graph to unify India's public statistical data.
- Integration of AI-driven natural language search interfaces, enabling users to query data directly and receive sourced, chart-based outputs.
- Support for plug-and-play private/public data overlays: organizations can run Data Commons instances on private servers for secure analytics.
- Data Commons now acts as backend for the UN Statistics Division, incorporating data from SDGs, WHO, ILO, etc., thanks to its open-source stack.
- Pilot launched: the 'data boarding pass' tool (data boardingpass.ai) enables natural language queries over ministry data for quick policy briefs.
- Major pain points identified include data silos, lack of metadata, difficulties in aligning disparate datasets, and a bottleneck due to a shortage of skilled data analysts.
- Panel stressed the importance of natural language tools and AI interfaces to reduce the technical barrier for non-data-scientists (e.g., government officials, MSMEs, local leaders).
- Highlighting the digital divide: current LLMs are largely English-centric and bandwidth-heavy, excluding vernacular users and those on the digital margin.
- Initiatives like Bhashini are working to include local language datasets and dialects, partnering with civil society for inclusive AI development.
- A vision laid out: Analogous to India's leapfrogs in fintech and telecom, digital public intelligence platforms like Data Commons could help India leap over the global AI divide.
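The schema-based, provenance-rich knowledge graph described above can be sketched in miniature. Everything here—the triples, place identifiers, statistic names, and the keyword "natural language" front end—is invented for illustration and is far simpler than the real Data Commons APIs:

```python
# Toy knowledge graph: (subject, predicate, value, provenance) triples.
# All identifiers and figures below are illustrative, not real data.
TRIPLES = [
    ("india/state/KA", "population", 67_000_000, "Census 2021"),
    ("india/state/KA", "literacyRate", 0.77, "NSO survey"),
    ("india/state/MH", "population", 124_000_000, "Census 2021"),
]

def query(place: str, stat: str):
    """Return (value, provenance) for a place/statistic pair, or None."""
    for subj, pred, value, source in TRIPLES:
        if subj == place and pred == stat:
            return value, source  # provenance travels with every answer
    return None

def ask(question: str):
    """Toy natural-language front end: match known place and statistic keywords."""
    places = {"karnataka": "india/state/KA", "maharashtra": "india/state/MH"}
    stats = {"population": "population", "literacy": "literacyRate"}
    q = question.lower()
    place = next((v for k, v in places.items() if k in q), None)
    stat = next((v for k, v in stats.items() if k in q), None)
    return query(place, stat) if place and stat else None

print(ask("What is the population of Karnataka?"))
```

The design point mirrors the panel's emphasis: because every value is stored with its source, even a natural-language answer can be returned "sourced," which is what separates a statistical knowledge graph from a bare chatbot.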
NeevCloud AI SuperCloud: India’s Answer to Global AI Compute Needs
The India AI Impact Summit 2026 featured a landmark announcement from NeevCloud, unveiling its renewed identity as an AI-driven transformation partner with a vision to provide affordable, sustainable, and intelligent cloud infrastructure for India. The centerpiece of the session was the formalization of a strategic partnership between NeevCloud and Agnikul Cosmos, an innovative Indian space technology company, marking a bold step towards launching orbital edge data centers leveraging space-based infrastructure. This collaboration, cemented by the signing of a memorandum of understanding, aims to deliver AI inferencing capabilities from space, addressing persistent challenges such as latency, power availability, physical security, and deployment time that hinder terrestrial data centers. The partnership will enable deployment of data centers in Low Earth Orbit (LEO), powered and cooled by space’s unique environmental advantages, catering especially to latency-sensitive, mission-critical use cases like autonomous vehicles, border surveillance, and near real-time applications. Leaders from NeevCloud and Agnikul Cosmos emphasized the strategic move as India’s answer to global cloud technology leadership, harnessing indigenous innovation to leapfrog current infrastructure constraints and provide next-generation digital services to billions.
- NeevCloud officially unveiled a new brand identity and company vision at the India AI Impact Summit 2026, shifting from traditional cloud infrastructure to an AI-focused, transformation-driven model.
- NeevCloud announced a strategic partnership with Agnikul Cosmos, a leading Indian private space launch company, to collaboratively develop space-based data center infrastructure.
- The two companies signed a formal memorandum of understanding for launching 'orbital edge data centers' in Low Earth Orbit, code-named 'Orion' (Orbital Inferencing Delivery Network).
- These space-based data centers will harness free solar energy and ambient space cooling, addressing high energy and cooling costs that traditional data centers face on Earth.
- The unique space-based approach also provides enhanced physical security and rapid deployment, bypassing terrestrial challenges like land acquisition, power/fiber connectivity, and long construction times (24 months for ground data centers, versus rapid orbital deployment).
- The new infrastructure is designed primarily for mission-critical inferencing workloads—such as autonomous vehicles, unmanned surveillance, and defense—where latency, reliability, and security are paramount.
- Leaders highlighted NeevCloud's evolution since 2013 and their goal to provide a Made-in-India alternative to dominant American technology giants.
- NeevCloud emphasized the ongoing development of its proprietary AI Supercloud control plane to integrate intelligent orchestration across terrestrial and orbital infrastructure.
- Agnikul Cosmos brings proven small satellite launch expertise—successfully conducting a fully 3D-printed engine test launch in 2024—and will offer its rocket upper stages as deployment platforms for NeevCloud's space data centers.
- The announcement signals a concerted move to position India at the forefront of next-generation AI and cloud innovation, with ambitious plans to enable AI inferencing for six billion users globally.
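The latency argument above can be sanity-checked with a back-of-envelope light-travel-time calculation. The 550 km orbit altitude and 2,000 km fiber route below are illustrative assumptions, not figures from the announcement:

```python
# Back-of-envelope: one-way propagation delay from a low-earth-orbit node
# directly overhead versus a distant terrestrial data center over fiber.
C_KM_PER_S = 299_792  # speed of light in vacuum
FIBER_FACTOR = 1.47   # light in optical fiber travels ~1/1.47 of c

def one_way_ms(distance_km: float, in_fiber: bool = False) -> float:
    speed = C_KM_PER_S / FIBER_FACTOR if in_fiber else C_KM_PER_S
    return distance_km / speed * 1_000

leo = one_way_ms(550)                    # assumed LEO altitude, satellite overhead
cross_country = one_way_ms(2_000, True)  # assumed fiber run to a far-away DC
print(f"LEO overhead: {leo:.1f} ms, 2000 km fiber: {cross_country:.1f} ms")
```

The point is qualitative only: an overhead LEO node can undercut a distant terrestrial data center on raw propagation delay, though real systems add switching, queuing, and ground-station hops on top of these floor values.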
Measuring Advanced AI: Science, Safety & Governance
The session at the India AI Impact Summit 2026 brought together global leaders from government AI safety institutes (from the UK, Singapore, US) and industry to discuss the evolving landscape of AI evaluation, measurement, and governance. The panelists detailed the formation, missions, and collaborations of national AI safety institutes, highlighting the establishment of the international Network for Advanced AI Measurement, Evaluation, and Science (NAMES)—a cross-country partnership that now includes 10 expert organizations. These bodies are shifting their mandates from pure safety toward AI innovation and adoption, all underscored by the shared imperative of rigorous, harmonized model evaluation. The US Center for AI Standards and Innovation (CAISI) has evolved from its prior safety-centric role, echoing a trend toward enabling responsible AI deployment. Industry voice Sara Hooker raised critical questions about the efficacy of current benchmarking practices, the dangers of academic and static approaches, and the unique potential for government to curate meaningful, private test sets and attract top technical talent. The panelists agreed on the centrality of international cooperation for developing measurement standards, addressing region-specific challenges like language and cultural relevance, and ensuring governments possess the scientific expertise to guide AI policy effectively. The network’s joint work, including the open-source release of the UK AI Safety Institute's Inspect evaluation platform, stands as a key summit deliverable aimed at global standardization and transparency.
- Formation of the Network for Advanced AI Measurement, Evaluation, and Science (NAMES) uniting 10 national AI safety and measurement institutions.
- Shift of the US AI agency (now the Center for AI Standards and Innovation, CAISI) from a focus on safety to fostering AI innovation and adoption.
- The UK AI Safety Institute was launched in late 2023 following the Bletchley Park summit, with Singapore's institute following in 2024 after the Seoul summit.
- The UK AI Safety Institute's Inspect platform—a standardized, open-source AI evaluation framework—is now available to governments and organizations worldwide.
- Technical AI talent is increasingly being recruited into government agencies, with the UK AI Safety Institute reporting 250 staff (100 in technical roles).
- Industry concern that most AI benchmarks have become ineffective due to static, academic orientation and model overfitting.
- Government and international networks possess a unique ability to design meaningful, private, and cross-cultural benchmarks—especially for underrepresented languages and evolving risks.
- Joint international model evaluations (e.g., language coverage, data disclosure, cyber risks) are being conducted to harmonize standards and address specific national or regional concerns.
- Recent release of draft NIST guidelines in the US for best practices in automated AI model evaluation.
- Cross-country collaboration is critical to avoid fragmentation and maintain consistent, scientifically-valid AI measurement standards.
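The panel's point about governments curating private test sets can be sketched as a minimal evaluation harness. The test items, the toy model, and the containment-based scoring below are all invented for illustration:

```python
# A held-private test set: never published, so models cannot overfit to it.
# Items and references here are invented examples.
PRIVATE_SET = [
    ("Translate 'water' to Hindi", "pani"),
    ("2 + 2 = ?", "4"),
]

def evaluate(model, test_set) -> float:
    """Fraction of items where the model's answer contains the reference string."""
    hits = sum(ref.lower() in model(prompt).lower() for prompt, ref in test_set)
    return hits / len(test_set)

# A trivial stand-in 'model' for demonstration:
def toy_model(prompt: str) -> str:
    return "4" if "2 + 2" in prompt else "jal"

print(f"accuracy: {evaluate(toy_model, PRIVATE_SET):.0%}")
```

Note that the toy model's "jal" answer is scored wrong even though it is a valid Hindi synonym for water—exactly the brittleness of static reference-matching benchmarks, and of under-resourced language coverage, that the panelists criticized.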
Building Inclusive Economies with Open-Source AI
The session at the India AI Impact Summit 2026 focused on the launch of a major policy brief titled 'Advancing Open-Source AI in India: Recommendations for Governments and Technology Developers.' The discussion highlighted the imperative of open source standards in AI, aligning them with digital public good (DPG) standards to ensure technology serves societal and development goals both in India and globally. Speakers, including representatives from India, Germany, and the open-source technology community, emphasized that open-source AI is not just a technical solution but also a tool for bridging AI inequalities, fostering global collaboration, and democratizing technological power. The summit report addresses definitional challenges around openness in AI, encourages definitional clarity to avoid 'open-washing,' and underscores the need for global, community-informed standards. It also acknowledges ongoing challenges, such as infrastructure, resource concentration, and the risk of power consolidation by either corporations or states. The overarching theme was the call for a pluralistic, community-driven, and transparent approach to open-source AI that empowers all stakeholders, including less-resourced communities, to participate and benefit equitably.
- Official release of the policy brief 'Advancing Open-Source AI in India' with recommendations for governments and technology developers.
- Strong alignment of the brief with the Digital Public Good (DPG) standard that includes nine indicators for open-source product vetting.
- Germany and India's collaboration is presented as a global model for AI systems built in the open, promoting global benefit versus tech company concentration.
- World Bank data cited: Only 17% of the world’s population accounts for 91% of AI venture capital and 87% of major AI models.
- WTO estimate: Global GDP could rise by up to 13% by 2040 if the current digital divides in AI are bridged.
- Germany’s Fair Forward initiative has released 16 open AI building blocks for climate action and 55 AI datasets as digital public goods, including impactful use cases in India (e.g., wildfire protection in Goa, crop optimization, public service access in local languages).
- Open-source AI is recognized as essential for democratic technological innovation, transparency, accountability, and strengthening digital sovereignty.
- Speakers stressed the need for clear, community-driven definitions of 'openness' to counter 'open-washing' and ensure trust.
- A call for global, inclusive discussions—beyond a few countries or actors—on open-source AI standards.
- Acknowledgement of persistent infrastructural and resource constraints: current open-source efforts still face dominance by large tech actors.
- Recognition that countering tech concentration requires more than just open-source: it needs competition policy, public infrastructure, and support for decentralized, pluralist developer communities.
The Ethics of Intelligence: Navigating Global AI Policy and Trust
The session at the India AI Impact Summit 2026 covered the evolution of AI-related initiatives, highlighting a 26-year track record in training over 1.3 million students and incubating 100+ startups in India. The organization presented its shift toward AI-driven products, including the creation of unique solutions such as 'Blue LinkedIn' for blue-collar workers, AI tutors, campus management platforms, and advanced regulatory compliance tools. The session transitioned to a panel featuring key European speakers, notably Brando Benifei, a lead architect of the EU AI Act, who clarified misconceptions around Europe's regulatory approach. The panel discussed the balance between innovation and regulation, emphasizing the goal of building trust in AI and avoiding the pitfalls experienced with unregulated social media. Showcased was the practical success of open-source AI innovation in Switzerland, aligned with EU regulatory principles, refuting the notion that regulation stifles innovation. The importance of ethics, governance, and media literacy—supported by responsible AI fellowships for journalists—was underscored as fundamental to public trust and effective global cooperation in AI development between Europe, India, and beyond.
- Over 1.3 million students trained and 100+ startups incubated in India over 26 years.
- Organization has built 50+ AI products including 'Blue LinkedIn' for blue-collar worker résumés and tools for AI-driven campus management, enterprise LLMs, and regulatory frameworks (Policy Ora).
- Introduction of an AI ethics and responsible journalism fellowship (ARI) targeting media professionals, supporting informed and ethical AI coverage.
- Panel discussion led by Brando Benifei (EU Parliament), Daniel Dobos (Swiss AI Standardization), and others focused on building trust in AI through robust regulation, transparency, and open standards.
- Clarification of EU AI Act's purpose: bans on manipulative AI, workplace/study emotion recognition, safeguards for sensitive use cases, and requirements for transparency (e.g., AI-generated content labeling).
- Emphasis on trustworthy AI deployment in democracy, with insights on the need for timely regulation to avoid irreversible societal impact (lesson from social media).
- Showcase of successful open-source, fully transparent Swiss large language models (Apertus) developed within the regulatory spirit of the EU AI Act.
- Consensus that regulation, when thoughtfully implemented, does not necessarily stifle innovation, but rather can foster global cooperation, ethical standards, and sustainable AI adoption.
- Call for stronger India-EU cooperation, focusing on shared values in AI deployment and standard-setting to increase AI adoption while maintaining trust.
Fintech for All | How AI is Revolutionizing Financial Inclusion
The panel on 'AI and Financial Inclusion' at the India AI Impact Summit 2026 brought together experts from banking, technology, and payments ecosystems to explore the real-world transformational potential of AI across India's financial services. The session set the stage by highlighting India's impressive advances in account access and digital payments, exemplified by over 700 million UPI transactions daily, but acknowledged significant work remains in areas like inclusive credit, insurance, and investment. Panelists identified three primary vectors for AI-driven financial inclusion: backend analytics using alternative data for credit and fraud detection, advanced multilingual voice interfaces for financial access, and systemic intelligence enabling local adaptation while safeguarding privacy. Product innovations like NPCI’s Hello UPI (voice-based UPI in multiple languages) and UPI Circle (delegated payments) showcase concrete use cases of AI and voice technologies lowering access barriers for non-smartphone users and non-English speakers. Nvidia stressed the importance of the AI technology stack and the growing agentic automation of all banking functions. Collectively, the discussion grounded optimism in practical challenges and solutions: moving beyond hype to meaningful inclusion through AI’s application across diverse demographics and platforms.
- India now processes over 700 million UPI digital payment transactions daily, representing more than half of all real-time payments globally.
- Roughly 80–85% of Indian citizens have active bank accounts, but true financial inclusion lags in access to credit, investment, and insurance.
- NPCI’s Hello UPI allows voice-based digital payments in 10 Indian languages, targeting users without smartphones or English proficiency.
- NPCI’s 123Pay and UPI Circle enable phone-based transactions and delegated payments for dependents or household members, advancing financial access.
- AI backend analytics utilize alternative data sources (e.g., UPI transaction history) for more inclusive credit underwriting and fraud detection.
- Advances in multilingual AI, such as small language models, support user-friendly, localized, and private financial services across India’s linguistic diversity.
- Nvidia highlights a five-layer AI stack powering BFSI transformation—spanning energy, hardware (GPUs), infrastructure, models (e.g., chatbots), and applications.
- Despite progress in payments and savings, broad-based inclusion in credit, investment, and protection requires more advanced and customized AI-driven solutions.
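The alternative-data underwriting idea above can be sketched as a toy logistic score over transaction-history features. The feature names, weights, and bias are invented placeholders, not any lender's model:

```python
import math

# Hypothetical weights over UPI-style transaction features (illustration only).
WEIGHTS = {"monthly_txn_count": 0.02, "avg_txn_inr": 0.001, "months_active": 0.1}
BIAS = -3.0

def repayment_probability(features: dict) -> float:
    """Logistic score: map transaction features to an estimated repayment probability."""
    z = BIAS + sum(WEIGHTS[k] * features.get(k, 0.0) for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))  # logistic link

applicant = {"monthly_txn_count": 80, "avg_txn_inr": 450.0, "months_active": 24}
p = repayment_probability(applicant)
print(f"estimated repayment probability: {p:.2f}")
```

The inclusion argument is visible in the inputs: a thin-file applicant with no formal credit history can still be scored, because the signal comes from payment-rail activity rather than a bureau record. A production system would learn the weights from repayment data and add fairness and privacy safeguards.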
Scaling Intelligence: NVIDIA NeMo and the Future of National AI | AI Impact Summit 2026
In this session, Bernard Wyn, NVIDIA's Director of Engineering, provided a comprehensive overview of the advancements in large language model (LLM) training at scale, with a special focus on NVIDIA's open-source initiatives and the development of the Nemotron family of models. He discussed the evolution from scaling pre-training to the imperative of post-training—enabling reasoning, efficiency, and adaptability for LLMs, particularly for agentic AI applications. Wyn emphasized NVIDIA's commitment to true open-source AI by not only releasing model weights and architectures but also providing datasets, recipes, and libraries to fully empower the community. He showcased the superior efficiency and intelligence of Nemotron 3 Nano (a 30B parameter model) and outlined forthcoming releases (Super and Ultra). The architecture leverages mixtures of experts, activating only a subset of parameters at inference to maximize compute efficiency. The session detailed agentic AI concepts, where AI agents perform complex, multi-step tasks using specialized, smaller models in concert—enabled by scalable, efficient hardware and open software. Wyn also traced the trajectory of LLM post-training, elaborating on the push for 'long horizon' reasoning, and described current and forward-looking methods for aligning LLMs through supervised fine-tuning and reinforcement learning, ultimately targeting increased reliability and autonomy for multi-agent systems.
- NVIDIA is advancing LLM training with a focus on both scaling and efficiency, transitioning from massive, monolithic models to adaptable, domain-specific architectures.
- NVIDIA's Nemotron 3 Nano (a 30B-parameter model) outperforms comparable models such as GPT-OSS 20B and Qwen3 30B in both intelligence and efficiency, delivering 300–360 tokens/s at lower compute cost.
- Mixture of Experts architecture enables only a fraction of the model's total parameters to be active during inference (e.g., 3B active out of 30B), greatly improving efficiency.
- Forthcoming launches: Nemotron Super and Ultra model variants are planned for imminent release.
- NVIDIA's open-source commitment goes beyond model weights, providing full datasets, training recipes, blueprints, and libraries for total reproducibility and customization via GitHub.
- Agentic AI is a major focus, where AI agents can reason, use tools, and cooperate to accomplish complex, multi-step workflows, including deep research, video summarization, and personalized planning.
- Smaller, specialized models are promoted for agentic AI, enabling cost-efficient, domain-specific agents that maintain IP control and data sovereignty.
- Post-training and alignment techniques have progressed from simple text completion, to conversation, to multi-step reasoning and tool use (now targeting even longer, more reliable reasoning chains).
- Modern LLMs have increased in task horizon scope—now able to handle tasks spanning multiple hours for humans, with a move towards reliably executing tasks that would take days or weeks.
- Alignment strategies are evolving, with an increased emphasis on reinforcement learning (RL) for reasoning quality, self-correction, and safety.
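The mixture-of-experts mechanism described above can be sketched in a few lines. This is an illustrative toy, not NVIDIA's implementation: the function name `moe_forward`, the random weights, and the dimensions are all assumptions. The key idea is that a router scores every expert per token, but only the top-k experts are actually evaluated, so most parameters stay inactive at inference time.

```python
import numpy as np

def moe_forward(x, experts, router_w, k=2):
    """Route one token through a toy Mixture-of-Experts layer.

    x: (d,) token activation; experts: list of (d, d) weight matrices;
    router_w: (d, n_experts) router weights. Only the top-k experts are
    evaluated, so the other experts' parameters stay inactive per token.
    """
    logits = x @ router_w                      # one router score per expert
    top = np.argsort(logits)[-k:]              # indices of the k best experts
    gates = np.exp(logits[top])
    gates /= gates.sum()                       # softmax over selected experts only
    # Weighted sum of the chosen experts' outputs; the rest are skipped entirely.
    return sum(g * (x @ experts[i]) for g, i in zip(gates, top))

rng = np.random.default_rng(0)
d, n_experts = 8, 10
experts = [rng.normal(size=(d, d)) for _ in range(n_experts)]
out = moe_forward(rng.normal(size=d), experts, rng.normal(size=(d, n_experts)))
print(out.shape)  # (8,)
```

With k=2 of 10 experts active, only about a fifth of the expert parameters are touched per token, which is the same ratio of effect (3B active of 30B total) the session attributed to Nemotron 3 Nano.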
How AI Will Reshape Global Development Beyond the SDGs
The first session of the India AI Impact Summit 2026 town hall emphasized the urgent need for AI to drive not just technological advancement, but meaningful, inclusive, and equitable impact in areas such as disaster governance, health, education, agriculture, and financial inclusion. Speakers highlighted the reduction in disaster-related mortality rates while underscoring the ongoing losses of housing and livelihoods, urging AI developers to focus on transformative solutions addressing root problems. Discussions spotlighted the structural inequities in innovation and IP ownership, calling for responsible scaling, public investment, and democratized access to AI resources. Tangible examples were offered, from large-scale fintech inclusion to rapid agricultural advisory deployments. The panel argued for designing with inclusion, ensuring human agency, upskilling the workforce, and prioritizing local relevance. Rejection of 'AI trickle-down' models was voiced, instead advocating for ecosystem-wide empowerment and collaborative, rights-based approaches rooted in public good, not just private gain. The session concluded with a call for all stakeholders, including governments, civil society, and industry, to commit to building AI pathways that empower the many, not just the privileged few.
- Mortality from disasters has decreased by 50% in recent decades, but economic and housing losses remain high, requiring deeper AI-driven transformation.
- 70% of global patent filings come from large companies, with 85% concentrated in five regions, leaving startups and emerging economies disadvantaged.
- Over 1 million students and researchers in India, and more than 100 universities, have benefited from affordable AI innovation tools built on open-source, fine-tuned models.
- AI can collapse barriers of cost, expertise, and scale, but only if inclusion is designed from the start across sectors like healthcare, agriculture, education, and research.
- Governments should provide subsidized compute, support open/lightweight models, invest in quality datasets, and nurture AI talent to support startups and inclusivity.
- Human judgment, context, and the need to upskill and reskill the workforce must remain central as AI is deployed.
- The World Bank is promoting 'small AI': practical, affordable, locally relevant solutions that can be deployed before every prerequisite is in place.
- Time to launch AI-driven agriculture advisories has reduced from 9 months (Maharashtra) to 3 weeks (Amul), evidencing an exponential diffusion curve through shared knowledge.
- The goal is to establish 100 collaborative AI diffusion pathways by 2030, enabling scalable sectoral impacts globally.
- Over $0.5 billion has been invested by the Patrick J. McGovern Foundation in AI for good, focusing on ecosystem empowerment rather than spotlighting a few pilots.
- Public investment and agency are needed to match private sector AI development, anchored in rights-based frameworks, norms, and democratic, public participation.
- Fintech players like MobiKwik use AI to tackle exclusion in India, focusing on ease of use, local-language accessibility, building trust, and promoting financial literacy among 180 million users.
Israel’s AI Model: Building a Better Future Through Innovation
The session opened with the Israeli ambassador to India highlighting the deep, trust-based partnership between the two nations, particularly in leveraging AI to address 21st-century challenges and socioeconomic advancement. Brigadier General (Res.) Eres Ascal, head of Israel's new AI Directorate at the Prime Minister's Office, then outlined Israel's ambition to be among the global top three in AI by building real-world solution labs, prioritizing AI safety and cybersecurity, and developing edge AI systems adaptable to sectors from healthcare to defense. Dr. Victor Alanes presented cutting-edge work from Israel’s Volcani Institute on AI-driven precision agriculture to reconcile sustainability with food security, emphasizing technology’s role in optimizing inputs and enabling data-driven, farmer-accessible solutions. Education innovation was spotlighted by Mav Zib from the Ministry of Education: Israel’s AI education strategy focuses on personalized learning for all students, advanced AI curriculum development, a unified digital platform (the 720 project) featuring personalized AI agents for students, teachers, and school leaders, as well as regulatory frameworks and sandboxes to scale and manage AI integration responsibly. Throughout the panel, a strong emphasis emerged on Indo-Israeli collaboration, real-life problem-solving, and the cross-sector potential of AI for improving security, sustainability, education, and governance.
- Israel aims to be among the global top three countries in AI, guided by a new national AI directorate under the Prime Minister’s Office.
- Israel’s three-pronged AI strategy includes development of real-world solution labs (with 50 verticals), AI safety and cybersecurity, and edge AI solutions for direct real-world implementation.
- Israeli AI efforts emphasize cross-sector applications, from defense and cyber to agriculture and education, leveraging both military and civilian expertise.
- Precision agriculture through AI at the Volcani Institute targets both sustainability and yield, utilizing deep learning, computer vision, digital twins, and crop-specific large language models.
- Education transformation in Israel includes AI-integrated curricula for all 2.3 million students, tiered AI competency development, and ‘AI month’ engaging over 70% of students in a single month.
- The '720 project' provides a unified, personalized digital platform with AI agents tailored for students, teachers, and principals, covering learning, progress monitoring, and educator support.
- Infrastructure and access innovations include AI learning bots from 4th grade up and dedicated tools for older students, coupled with a regulatory framework and sandboxes for safe, national-scale AI rollout.
- Barriers to AI adoption in agriculture and education (legal, social, financial, and trust-related) are being explicitly addressed.
- India-Israel collaboration is positioned as vital, with invitations for joint projects and knowledge exchange across both public and private sectors.
Inclusive AI in India: Making Technology Accessible to Everyone
The session at the India AI Impact Summit 2026 focused on the critical theme of democratizing artificial intelligence (AI) in India, emphasizing that making AI accessible, affordable, and inclusive is both a technological and developmental imperative for the nation. Distinguished government, industry, and academic leaders agreed that true democratization involves more than widespread technology adoption; it requires multi-faceted efforts spanning governance, open data, infrastructure, capacity building, language inclusion, skill development, and responsible, ethical deployment. Key government initiatives such as the India AI mission, Bhashini for multilingual inclusion, future skilling programs, and the expansion of digital public infrastructure were highlighted as foundational to India’s strategy. The central message was that India's experience in building large-scale, inclusive digital systems positions it to serve as a model for the global south, though challenges regarding cost, talent, data quality, and ethical oversight persist. The session called for urgent, collective action from government, industry, startups, academia, civil society, and international partners to ensure AI serves as a tool for empowerment and equity, rather than exclusion and increased inequality.
- AI democratization is framed as a developmental imperative, not just a technological one, for India and the global south.
- India AI Mission aims to provide shared national AI compute infrastructure to prevent prohibitive cost barriers for startups and researchers.
- Government prioritizes creation of high-quality, anonymized, and consent-based public datasets, with strong data protection and privacy safeguards.
- AI skilling initiatives are underway to prepare youth nationwide, especially from tier 2 and 3 cities, for an AI-driven future.
- Bhashini program spearheads language inclusion by enabling AI-powered technologies in hundreds of Indian languages.
- Successful digital public infrastructure examples cited: Aadhaar (over 1.4 billion identities), UPI (12 billion transactions monthly), and CoWIN.
- AI deployment in India already benefits precision agriculture, healthcare diagnostics, adaptive education, and urban mobility.
- India’s approach to AI emphasizes responsible, ethical development with transparency, fairness, and accountability.
- Model stresses balancing open innovation and global interoperability with national data sovereignty and strategic autonomy.
- Persistent challenges: high compute costs, limited high-quality domain data, need for continuous AI talent upskilling, ethical framework evolution, risk of algorithmic bias.
- Session called for collective action: government, industry, startups, academia, civil society, and global partnerships all play key roles.
Trusted AI for Nations | Building Ethical Public Sector Frameworks
The session at the India AI Impact Summit 2026 highlighted the transformative potential and critical importance of trust-centric, people-first, and open-source-driven AI for public sector advancement in India and the global south. Speakers emphasized that AI is now foundational to public digital infrastructure, urging that trust, human accountability, and ongoing oversight must be embedded by design—not as afterthoughts. UN-partnered platforms like AIHub and India's unique digital public infrastructure (DPI) were spotlighted for facilitating responsible, scalable, and inclusive AI adoption, integrating robust governance, technical safeguards, and AI literacy. Real-world Indian examples demonstrated how AI is already improving healthcare delivery (with voice-to-text tools for frontline health workers and multilingual immunization chatbots), agriculture (AI-powered tools for crop planning, pest surveillance, and insurance), and climate resilience. The summit underscored India's leadership in making AI widely accessible as a global public good, via open-source models adaptable across diverse, underserved communities, and outlined the strategic goal of positioning responsible AI as an accelerator for equity, resilience, and sustainable development across the global south.
- Trust and accountability are fundamental to scalable public sector AI—requiring governance, technical safeguards, and ongoing oversight from the outset.
- Human readiness and AI literacy, rather than just technology deployment, are essential for sustainable impact and adoption.
- UNIC's AIHub delivers shared experimentation platforms, open-source toolkits, and common governance frameworks across all UN agencies.
- Open-source AI models—including those developed in India—are deliberately prioritized for maximum inclusiveness and adaptability.
- India's digital public infrastructure (DPI) is a global benchmark, enabling affordable, interoperable, and language-diverse AI-powered services.
- India’s AI-driven health interventions include voice-to-text tools for nearly one million health workers and a multilingual chatbot for immunization guidance in underserved communities.
- AI solutions like Bharat Vistar, YES, and CropIC support farmers with real-time, mobile-first crop advisories, pest surveillance, and faster crop insurance claims, boosting rural resilience.
- AI-enabled early warning systems are being piloted to integrate animal and human health data for proactive disease outbreak management.
- UNDP's strategic plan for 2026-2029 positions responsible, people-centric AI as a pillar for human development and planetary health.
- India is shaping a new global standard for people-centered, open-source AI as a global public good, particularly benefitting the global south.
How AI Is Redefining Indian Pharma for Viksit Bharat 2047
The session 'From Volume to Value: Role of AI in Redefining India's Pharma Leadership for Viksit Bharat 2047' focused on the Indian pharmaceutical industry's strategic pivot from being the global leader in generic medicines by volume toward becoming an innovation-driven value leader by 2047. Moderated by Priyanka, the panel brought together industry veterans and new AI research leadership—including Dr. Sharvil Patel (Zydus Lifesciences and IPA), Vinslow Tucker (Lilly India), Dr. Amit Sheth (India AI Research Organization, IRO), and Mr. Sudarshan Jain (IPA). Dr. Sheth introduced IRO's mission to build sovereign, world-class AI talent and foster AI-driven pharmaceutical innovation in India, likening IRO's ambition to that of ISRO in space. Key opportunities highlighted included rapidly scaling AI talent in-country, developing enterprise-wide AI solutions, and creating IP- and knowledge-driven tools for drug discovery, clinical trials, regulatory compliance, and manufacturing optimization. Industry voices emphasized moving past isolated pilots to transformative, enterprise-scale AI deployments with clear short-term and long-term productivity gains. Policy signals from the latest Union Budget and reinforced government focus create fertile ground for these shifts, positioning pharma as the flagship sector for India's ambitious AI innovation drive.
- India supplies ~20% of global generic medicines and ~60% of vaccines to over 190 countries.
- India’s Viksit Bharat 2047 goal is a $500 billion pharma industry, shifting focus from volume (generics) to value (innovative/biologic drugs).
- Government’s Union Budget increased allocations and policy support for bio-pharma, biologics, innovation, R&D, deep tech, and AI.
- India AI Research Organization (IRO) was founded as a sovereign institution to build world-class AI talent domestically and drive original AI IP.
- IRO–IPA collaboration prioritizes pharma as the flagship sector for developing Indian AI capabilities.
- IRO aims to quickly develop top-tier AI researchers in India—with founding faculty including returnee Indian talent from global institutions.
- IRO focuses on moving from prototypes to products, supporting startups with seed/growth capital and co-development.
- Specific AI opportunities outlined: pharma knowledge graphs for R&D acceleration, molecular design via diffusion models, clinical trial data quality, manufacturing process optimization, predictive maintenance, and regulatory compliance via document automation.
- Zydus Lifesciences plans enterprise-wide, not piecemeal, AI implementation—targeting regulatory documentation, quality, and manufacturing with productivity gains of 30–50%.
- Industry acknowledges AI’s main short-term value in compliance, documentation, and manufacturing automation, with R&D breakthroughs seen as longer-term.
- Panel emphasized collaborations across industry, startups, academia, and public policy to accelerate AI-driven pharma innovation.
Bharat’s Sovereign AI: Scaling Innovation for a Billion+
The session at the India AI Impact Summit 2026 centered on the critical theme of AI sovereignty within the evolving IndiaAI mission. Leading voices from Tata Communications and AI startups underscored that sovereignty must be comprehensive, spanning not merely data but also models, infrastructure, and hardware. Panelists articulated India's strategic shift from basic data localization towards intent-driven technological independence, emphasizing the need for heterogeneous, energy-efficient, and easily programmable hardware to support scalable, secure, and border-independent AI. The discussion recognized network infrastructure as the 'invisible layer' vital to scaling AI beyond urban centers, and advocated for federated learning and differential privacy as mechanisms for collaborative AI development within the Global South, allowing India and its partners to share model advancements without compromising citizen data. The conversation reflected confidence in recent policy shifts, capital influx, and operational readiness to rapidly scale domestic AI compute, marking a decisive move towards a resilient, sustainable, and globally relevant sovereign AI ecosystem.
- AI sovereignty now encompasses data, models, compute infrastructure, and software control—relying on intent and not just compliance checklists.
- India's approach calls for ownership at every tech stack layer: data centers, hardware, management platforms (including control planes), and models.
- Panelists emphasized moving from hardware dependency to heterogeneous hardware adoption, aiming for democratized access, energy efficiency, and low-latency performance.
- Operational bottlenecks like capital availability and policy have largely been alleviated recently—India can now rapidly scale AI compute capacity.
- Network connectivity is essential to bringing AI to population scale, particularly in tier-2 and tier-3 cities, while maintaining security and interoperability.
- Advocacy for federated learning and differential privacy allows collaborative AI innovation with the Global South, leveraging local data but sharing model weights.
- Open source platforms (OpenStack, Kubernetes) are being increasingly adopted to avoid reliance on proprietary foreign software.
- Recent government policies around data center operations and clear taxation are attracting global investments and supporting AI infrastructure expansion.
- Edge computing and scalable cloud solutions are being deployed to enable rapid, real-time AI services across the country.
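The federated-learning-with-differential-privacy pattern advocated above can be illustrated with a toy FedAvg round. This is a minimal sketch under simplifying assumptions (a linear model, fixed Gaussian noise as a stand-in for a properly calibrated DP mechanism; the function names are hypothetical, not from any specific framework): each client trains locally, clips and noises its weight update, and only that noisy update, never the raw data, reaches the server.

```python
import numpy as np

def local_update(weights, data, lr=0.1):
    """One client's local step: gradient descent on a linear model.
    The raw (X, y) data never leaves the client."""
    X, y = data
    grad = X.T @ (X @ weights - y) / len(y)
    return weights - lr * grad

def federated_round(weights, clients, clip=1.0, noise_std=0.05, rng=None):
    """FedAvg with a simple differential-privacy-style step: each client's
    update is norm-clipped and Gaussian noise is added before averaging,
    so no individual contribution can be read off the shared update."""
    rng = rng or np.random.default_rng(0)
    deltas = []
    for data in clients:
        delta = local_update(weights, data) - weights
        norm = np.linalg.norm(delta)
        delta = delta * min(1.0, clip / (norm + 1e-12))          # bound sensitivity
        delta += rng.normal(scale=noise_std, size=delta.shape)   # obscure the update
        deltas.append(delta)
    return weights + np.mean(deltas, axis=0)  # server sees only noisy deltas

rng = np.random.default_rng(1)
clients = [(rng.normal(size=(20, 3)), rng.normal(size=20)) for _ in range(4)]
w = np.zeros(3)
for _ in range(5):
    w = federated_round(w, clients, rng=rng)
print(w.shape)  # (3,)
```

In the cross-border setting the panel described, "clients" could be national institutions: each keeps citizen data on its own infrastructure and exchanges only the clipped, noised model updates.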
Fueling the Revolution: Democratizing Compute for AI Startups & Economic Growth
The session at the India AI Impact Summit 2026 brought together key voices from Nvidia, venture capital, and startup founders to discuss the evolving AI and deep tech ecosystem in India. Nvidia outlined its comprehensive developer-focused platform and Inception program supporting Indian startups at every stage, including training via the Deep Learning Institute, technical mentorship, access to capital, and market access. The conversation highlighted how India is now building AI capabilities in tandem with global peers, unlike previous technology cycles. On the investment front, India's AI and deep tech funding landscape is transforming through unprecedented commitments: the government’s 100,000 crore ($12.5 billion) RDIF for research and innovation, a $2.5 billion pledge over five years by the India Deep Tech Alliance (with at least $1 billion for AI), and strategic collaborations with global giants like Nvidia, Micron, and Qualcomm. Deep tech investing in India now hinges on defensible technology and rigorous team capability, given long development cycles and high capital needs. While funding and talent are robust, a recurring concern is the relatively slow domestic enterprise adoption of AI—prompting calls for greater internal AI adoption across traditional sectors to prevent talent and startups from focusing solely on export markets. Collectively, these developments underscore the rapid maturation and global integration of India’s AI ecosystem and the coordinated efforts needed to ensure its long-term impact.
- Nvidia’s Inception program in India now supports all startups under 10 years old, providing free resources—training, developer tools, deep learning courses, technical mentorship, and potential access to Nvidia’s investment arm.
- Nvidia confirmed its commitment to India by expanding developer tools across domains (autonomous vehicles, healthcare, physical AI) and fostering inter-startup collaboration.
- India’s Government has launched a 100,000 crore ($12.5 billion) Research Development and Innovation Fund (RDIF) to accelerate deep tech and AI development.
- The India Deep Tech Alliance (IDTA), a coalition of VC firms and industry leaders, has pledged $2.5 billion for deep tech over the next five years—including $1 billion specifically for AI startups.
- Strategic partnerships established with corporate giants (Nvidia, Applied Materials, Micron, Qualcomm, L&T, CG Power) to facilitate founder mentorship, lab access, customer connections, and potential acquisitions.
- Investment in Indian deep tech differs from traditional tech: focus on defensible, truly innovative technology and teams able to execute over longer cycles with high capital needs.
- AI innovation in India is now happening on a global timeline—narrowing the historic gap seen in SaaS adoption—leading to world-class products launched by Indian founders.
- Indian AI startup financing is strong at early stages, but previously lacked adequate growth capital—now addressed by government and VC commitments.
- A persistent challenge: relatively low domestic enterprise AI adoption, prompting efforts to drive internal AI integration across traditional sectors to maintain India’s innovation momentum.
Panel AI at Scale
The session at the India AI Impact Summit 2026 explored the transformative journey and key challenges of deploying artificial intelligence at scale in India and comparable emerging markets. Executives from leading tech and telecom companies shared insights on building world-class recommender systems and AI infrastructure adapted to local economic realities, notably reducing cloud costs and ensuring algorithmic efficiency to suit lower ARPU environments. The discussion emphasized the importance of developing homegrown AI talent and technology, advocated for sovereign AI platforms to preserve linguistic and cultural heritage, and highlighted the strategic shift from vertical to horizontal organizational structures to enable scalable AI adoption. Both speakers underlined the growing role of generative AI and LLMs in content and ad personalization, how telecom innovation like AI-enabled RAN and grid architectures bridge urban-rural divides, and the imperative of democratizing AI in a sovereign, scalable, and cost-effective manner tailored to India's massive and diverse user base.
- The first multitask deep learning model deployment in 2022 increased time spent per DAU (daily active user) by 40–50%, directly impacting revenue.
- Cloud infrastructure costs, over $100 million annually in 2022, were cut to roughly one-third of that level through algorithmic and system efficiency gains.
- AI/ML R&D teams are now about 50% India-based, with local talent reaching parity with global standards after sustained investment.
- Current recommender system performance is approximately 90% of TikTok and Meta standards, but optimized for vastly lower ARPU environments—around $2 versus $100 for global peers.
- Telecom AI initiatives led to ARPU growth of 14% (versus industry average of 7%) and reduced churn from 3% to 1.4%.
- Hyperpersonalized connectivity and edge AI (AI on RAN, AI grid) are enabling low-latency intelligence delivery to over 80,000 villages in Indonesia—a model advocated for India as well.
- Transition from AI RAN labs to distributed AI Grid expected to exponentially scale low-cost, real-time intelligence and empower rural populations.
- Speakers stressed the necessity of sovereignty and localization in AI: building indigenous LLMs and recommender engines to retain control over data and cultural assets.
- Generative AI is multiplying content creation capabilities, lowering barriers for user-generated content and complicating content relevance; strong, efficient recommendation systems are now critical.
- AI's role in advertising has evolved from click-optimization to deep funnel conversion, increasing revenue efficiency and relevance for both users and advertisers.
- Cultural and organizational change towards horizontal collaboration and reskilling is vital for successful AI scaling in traditional sectors like telecom.
- Sovereign, application-driven AI infrastructure is essential for India to maintain economic competitiveness and ensure AI benefits reach all social strata.
Press Conference - Guinness Record Announcement | AI Impact Summit 2026
The India AI Impact Summit 2026, held at Bharat Mandapam in Delhi, spotlighted the rapid integration of artificial intelligence across every major sector—including fashion, automotive safety, agriculture, and assistive technology for the visually impaired. The summit drew significant youth participation, underlining their pivotal role in shaping a technologically advanced India by 2047. Noteworthy showcases included AI-driven innovations such as advanced virtual changing rooms, AI-enabled vehicles with safety features like collision alerts and blind-spot monitoring, and smart eyewear empowering the visually challenged with real-time information. India’s positioning as a provider of secure, affordable AI and data platforms was emphasized, offering a robust alternative for the Global South and addressing global concerns over data security and privacy. The event also highlighted the support for AI startups under the government-led ₹10,000 crore AI Mission, underscoring policy commitment to fostering an inclusive, secure, and globally competitive AI ecosystem. The summit served as a platform for startups, multinational companies, and technologists to collaborate, exchange ideas, and demonstrate India's capabilities as both a leader and enabler in the global AI domain.
- High youth turnout at the AI Impact Summit 2026; focus on their role in India's future by 2047.
- Demonstrations of AI-powered innovations: virtual changing rooms for retail, AI-enabled bikes and cars enhancing road safety, and smart eyewear for the visually impaired.
- Highlight of India's offering of secure and affordable AI data platforms, addressing global data security concerns and appealing to Global South nations.
- Presence of multinational AI companies and startups, fostering competitive and collaborative innovation.
- Reiteration of government commitment through the ongoing ₹10,000 crore AI Mission to support AI startups and ecosystem growth.
- Emphasis on AI's multidisciplinary impact—from agriculture and healthcare to social and economic sectors.
- Summit positioned India as a global hub for AI, providing a platform for learning, exposure to international technology trends, and ecosystem building.
The 2026 Scaling Playbook: How to Build Anti-Fragile AI Startups
The session at the India AI Impact Summit 2026 brought together industry leaders, startup founders, and policymakers to discuss the rapid evolution of AI infrastructure, the rise of 'lean giants' in the startup ecosystem, legal technology challenges, and government initiatives aimed at bolstering India's AI capabilities. Panelists highlighted the accelerating pace at which powerful AI models are becoming accessible, driving significant shifts in developer tooling, business productivity, and startup scalability. Emphasis was placed on the emerging class of high-revenue, low-employee startups—enabled by AI— and the shifting expectations of investors and markets. Legal tech pioneers described practical approaches to managing AI ‘hallucinations’ for sensitive contract work, underscoring the importance of transparency and user control. Government representatives detailed a growing stack of public AI infrastructure (with over 38,000 GPUs and substantial data initiatives) that offers startups a resilient platform to address uniquely Indian and global south problems, stressing that successful ventures must retain agility and a sharp focus on solving real, scalable problems rather than deploying AI for its own sake.
- Google's Gemma open-source models now surpass Gemini 1.5 Pro, are lightweight enough for mobile, and are nearly costless to deploy.
- Shift towards software and documentation built for AI agents rather than directly for human users, with increased personalization.
- Emergence of 'lean giants'—startups achieving high revenues with very small teams (e.g., Emergent: $100M run rate with 100-200 employees; productivity of $500,000 per employee vs. Infosys at $50,000).
- Extreme example: Claudebot, built by a single developer and reportedly sold to OpenAI for $50M.
- AI-driven productivity allows startups to massively outperform legacy companies in cost efficiency.
- Addressing concerns about AI taking jobs, panelists cited Jevons Paradox: as AI makes tasks cheaper, demand grows, supporting more SMBs and micro-entrepreneurs.
- In legal tech, trust is built via full visibility of AI decisions, past performance monitoring, and user interfaces that keep humans in the decision loop.
- Transition in legal AI from custom models for each client to leveraging mature, general-purpose models as data volume and model reliability have increased.
- India's public AI infrastructure now includes access to over 38,000 GPUs and substantial public data sets to support startup innovation.
- Government strategy emphasizes building for India's diverse domestic market to ensure resilience to global economic trends and applicability to other global south markets.
- Startups are encouraged to focus on real, scalable problems, rather than AI-washing or chasing trends, and to maintain agility for rapid pivots.
- Spotra, a legal tech AI startup, received recent fundraising and is aggressively hiring.
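The trust-building pattern the legal-tech panelists described, visibility into AI decisions plus humans kept in the loop, can be sketched as a confidence gate. Everything here is illustrative: `triage`, the threshold value, and the stand-in classifier are assumptions, not any vendor's actual API. Low-confidence outputs are escalated to a human, and every step is logged so the user can audit how a clause was labelled.

```python
from dataclasses import dataclass, field

@dataclass
class Review:
    clause: str
    ai_label: str
    confidence: float
    decided_by: str = "ai"
    log: list = field(default_factory=list)

def triage(clause, classify, threshold=0.9):
    """Gate AI contract analysis behind a confidence threshold.

    `classify` returns (label, confidence). Low-confidence results are
    routed to a human reviewer, and each decision is logged to give the
    user full visibility into how the label was reached.
    """
    label, conf = classify(clause)
    review = Review(clause, label, conf)
    review.log.append(f"model proposed '{label}' at confidence {conf:.2f}")
    if conf < threshold:
        review.decided_by = "human"
        review.log.append("below threshold: escalated to human review")
    return review

# Hypothetical stand-in for a real contract-clause classifier.
def fake_classifier(clause):
    return ("indemnity", 0.95) if "indemnify" in clause else ("unknown", 0.40)

a = triage("Supplier shall indemnify the Buyer...", fake_classifier)
b = triage("Force majeure shall include...", fake_classifier)
print(a.decided_by, b.decided_by)  # ai human
```

The design choice mirrors the panel's point: the system never hides a 'hallucination' risk behind a confident UI; uncertain calls are surfaced, escalated, and auditable.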
Cracking the AI Skill Code: How to Stay Relevant in the Age of Intelligence
The session set the context for a comprehensive conversation on the intersection of AI, industry, and education at the India AI Impact Summit 2026. Professor Sundar highlighted the pivotal role of BITS Pilani's work-integrated learning programs, emphasizing the institution's longstanding commitment to industry-academia collaboration for reskilling and upskilling employed professionals. He stressed the need for foundational education, not just tool-driven training, to prepare individuals for the continuously evolving AI landscape. The discussion outlined clear responsibilities for the various stakeholders: individuals must pursue continuous and foundational learning; educational institutions should provide accessible, equitable AI education; organizations are responsible for nurturing a culture of continuous development rather than cyclical hiring and firing; governments must invest in and shape education for societal good. These points were illustrated through relatable personas—an employee navigating the AI-transformed workplace and a leader steering organizational adaptation—both facing the dual challenge of skill development and mindset shift amid rapid technological change. The session strongly argued that cracking the 'AI skill code' requires both technical competence and adaptability, using real-world anecdotes to reinforce the message that mindset is as crucial as capability in thriving amidst AI disruption.
- BITS Pilani’s Work Integrated Learning Programs division has facilitated industry-academia engagement for 46 years, focusing on reskilling professionals without requiring career breaks.
- The university has proactively launched a master's-level AI program for working professionals, running successfully for 2.5–3 years.
- Professor Sundar outlined stakeholder responsibilities: individuals must acquire foundational education and adaptability, institutions should mandate equitable AI access, organizations must foster a learning culture over 'hire and fire,' and governments need to invest in AI education and policy guidance.
- AI-readiness for professionals is characterized by adaptability, emotional intelligence, and continuous on-the-job learning alongside foundational knowledge.
- Anecdote: An Uber driver's experience illustrates that mindset shifts—overcoming fear and embracing technology—are essential for successful transition in technology-driven workplaces.
- The session dispelled the myth that jobs simply disappear with technological change; they typically transform instead, underscoring the need for reskilling.
- The discussion serves as a prelude to an expert panel focusing on AI’s pervasiveness and industry impact.
- Cracking the AI skill code is a dual process: building technical capability and cultivating a forward-thinking, adaptive mindset—not merely learning new tools.
How AI Is Strengthening Resilient Infrastructure
The session, 'AI for Disaster Resilient Infrastructure,' convened prominent experts from government, academia, industry, and international organizations to examine the role of artificial intelligence in boosting the resilience of infrastructure against disasters, particularly in the context of climate change and extreme weather events. The discussion highlighted India's pressing vulnerability to infrastructure losses—citing figures upwards of 19,000 crore INR in direct damages from recent disasters—and the looming challenge that over half the infrastructure needed by 2050 is yet to be built. The panel underscored AI's potential as a strategic enabler in infrastructure planning, risk assessment, real-time mitigation, and post-disaster response, referencing both global datasets and on-ground case studies from India (e.g., Satabhaya village and coastal erosion). Key issues included the need for moving AI tools from experimental phases to scalable deployments, bridging the digital divide and governance gaps, integrating AI across data value chains, improving system connectivity, and ensuring technology is accessible and trusted by state actors. The panel called for strategic, multi-sectoral frameworks to mainstream AI into disaster management, leveraging upcoming investments (e.g., India's 12 lakh crore capital expenditure in the 2026 budget) to ensure that new and existing infrastructure are not only smarter but more resilient for people and the planet.
- Globally, average annual infrastructure losses exceed $700 billion; in India alone, direct infrastructure asset damage across 11 states totaled 19,000 crore INR in just two years.
- Indirect infrastructure losses can be up to 7.4 times higher than direct physical asset damages.
- More than half the global infrastructure required by 2050 has yet to be built, offering a unique opportunity to embed AI-driven resilience from the outset.
- AI has transitioned from a novel experiment to a critical tool in risk analysis, recovery planning, and adaptive infrastructure management for disasters.
- Case study highlighted: Satabhaya village in Odisha faces ongoing challenges from both sea and river erosion, demonstrating the complex, multi-hazard risk landscape and underscoring the potential role of AI in forecasting and mitigation.
- Resilience AI and other local initiatives received recognition (e.g., award for Resilience 360) for innovative use of AI in disaster management.
- India's 2026 budget includes 12 lakh crore INR in capital expenditure for major projects (e.g., seven new rail corridors, 20 national waterways), further necessitating resilient planning.
- Key barriers identified: scaling pilot AI programs into mainstream, overcoming digital divides, building governance capacity, and addressing skepticism around AI adoption.
- Panel structured around three pillars: strengthening the data value chain for resilience, enhancing AI-enabled connectivity and collaboration, and optimizing asset/network performance through AI.
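The forecasting role the panel envisions for AI at erosion-prone sites like Satabhaya can be illustrated with a minimal sketch. This is a toy least-squares trend extrapolation over hypothetical shoreline survey data (the years and distances below are invented for illustration); production erosion models would ingest satellite imagery, wave climate, and sediment data rather than a five-point series.

```python
def linear_trend_forecast(years, shoreline_m, target_year):
    """Ordinary least-squares fit of shoreline position vs. year,
    extrapolated to target_year. A toy stand-in for the data-driven
    erosion forecasting discussed in the session."""
    n = len(years)
    mean_x = sum(years) / n
    mean_y = sum(shoreline_m) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(years, shoreline_m))
    var = sum((x - mean_x) ** 2 for x in years)
    slope = cov / var                      # metres of change per year
    intercept = mean_y - slope * mean_x
    return slope, intercept + slope * target_year

# Hypothetical survey data: shoreline distance (m) from a fixed benchmark.
years = [2018, 2019, 2020, 2021, 2022]
shoreline = [500.0, 488.0, 477.0, 463.0, 452.0]
rate, projected = linear_trend_forecast(years, shoreline, 2030)
print(round(rate, 1), round(projected, 1))  # -12.1 355.0
```

Even this crude trend line turns raw surveys into an actionable number (about 12 m of retreat per year), which is the kind of proactive signal the panel wants mainstreamed into infrastructure planning.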
How to Build Global AI Incident Monitoring & Response
The session at the India AI Impact Summit 2026, organized by The Future Society in collaboration with the OECD AI expert group, focused on the growing significance and complexity of AI incidents globally. The discussion highlighted the inadequacy of current incident detection and monitoring systems, emphasized the challenges in cross-border data sharing and response coordination, and stressed the need for rigorous governance frameworks. Panelists noted that while AI incidents—from cyberattacks to manipulative systems—are increasing in frequency and severity, responses remain reactive and fragmented, often hindered by inconsistent definitions, lack of comprehensive reporting requirements, and unclear accountability structures. The session called for building robust, interoperable, and proactive international infrastructure for AI incident prevention, reporting, taxonomy development, and sharing actionable insights with policymakers, drawing parallels with aviation and social media regulation. Key technical and policy questions were raised regarding information-sharing, institutional responsibility, early warning mechanisms, and creating effective feedback loops to better anticipate and mitigate AI-related risks.
- AI incidents are increasing in frequency, scale, and severity globally, affecting critical infrastructure and vulnerable populations.
- Current incident monitoring relies heavily on scraping mainstream media, leading to incomplete and unsystematic detection.
- There are major gaps in cross-border coordination and data-sharing due to diverse definitions, reporting requirements, and taxonomies (e.g., EU AI Act, California's SB43).
- Accountability remains difficult to assign due to the complexity of the global AI value chain and the lack of a standardized liability regime.
- Preparedness and response capabilities are lagging, with calls for more systematic and coordinated international approaches.
- OECD and other organizations are working on incident classification schemas but acknowledge missing feedback loops to guide policymakers effectively.
- Incident reporting systems are uneven worldwide; some types of incidents, such as deepfakes and election interference, are rising steeply.
- The session stressed the need for a proactive, interoperable infrastructure for incident reporting, analysis, and governance.
- Technical and policy questions remain open on how to scale taxonomies, establish robust information-sharing protocols, and link incidents to threshold/risk warning systems.
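One concrete piece of the interoperable reporting infrastructure the panel calls for is a shared incident record format. The sketch below is a minimal, assumed schema (all field names are illustrative, not an official OECD or EU AI Act structure) showing how a report filed under one local taxonomy could be serialized into a stable payload for cross-border sharing.

```python
import json
from dataclasses import dataclass, asdict
from enum import Enum

class Severity(Enum):
    NEAR_MISS = "near_miss"
    INCIDENT = "incident"
    SERIOUS = "serious_incident"

@dataclass
class IncidentReport:
    """Minimal cross-border AI incident record; illustrative fields only."""
    incident_id: str
    jurisdiction: str        # where the harm occurred
    system_type: str         # e.g. "recommender", "frontier_model"
    harm_categories: list    # e.g. ["safety", "privacy"]
    severity: Severity
    description: str
    source_taxonomy: str = "local"  # taxonomy the report was filed under

    def to_payload(self) -> str:
        """Serialize to a stable JSON payload for information sharing."""
        d = asdict(self)
        d["severity"] = self.severity.value
        return json.dumps(d, sort_keys=True)

report = IncidentReport(
    incident_id="IN-2026-0001",
    jurisdiction="IN",
    system_type="frontier_model",
    harm_categories=["critical_infrastructure"],
    severity=Severity.SERIOUS,
    description="Model-assisted intrusion into a grid operator's network.",
)
print(json.loads(report.to_payload())["severity"])  # serious_incident
```

Recording the `source_taxonomy` alongside the normalized fields is one way to preserve local definitions while still letting aggregators compare incidents across jurisdictions.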
AI from Finland: Driving Innovation for Resilience and Sustainability
The session at the India AI Impact Summit 2026 focused on the rising frequency, severity, and global impact of AI incidents, such as AI-enabled cyber attacks, manipulative AI systems, and the proliferation of deepfakes. Experts from international organizations and governments discussed the inadequacy of current incident detection methods, the challenges of cross-border data sharing and taxonomy interoperability, and limitations in existing governance frameworks. They emphasized that the current system relies excessively on media-reported incidents rather than systematic or mandatory reporting, which leaves many less visible but potentially serious issues undocumented. The panel highlighted the urgent need for robust, proactive, international infrastructure for AI incident monitoring, classification, accountability and preparedness. Key recommendations included developing more comprehensive and enforceable reporting standards, establishing systematic information-sharing protocols, closing feedback loops with policymakers for more proactive response, and evolving global taxonomies to better identify and learn from emerging threats. The session concluded by underscoring the importance of continuous monitoring and governance beyond the pre-deployment stage, given the rapid technological advancements in AI.
- AI incidents—including cyber attacks on critical infrastructure and manipulative AI systems—are increasing in frequency, scale, and severity.
- Current incident monitoring is heavily reliant on media reports, leading to non-systematic detection and data gaps; social media monitoring has declined due to access limitations.
- Mandatory incident reporting is in early stages in some jurisdictions (e.g., EU AI Act, California SB43), but enforcement and global interoperability remain unresolved.
- There is significant inconsistency and lack of agreement on definitions and taxonomies of ‘AI incidents’ across countries and regulatory bodies.
- Accountability is challenging due to the complex and globalized AI value chain; liability regimes for AI incidents are not yet established.
- Preparedness and coordinated incident response capacity are insufficient; enhanced cross-border infrastructure is essential.
- OECD's AI Incident Working Group is developing taxonomies for incident classification and working to provide more proactive recommendations for policymakers.
- Certain types of incidents (like deepfakes) have increased due to advances in AI tools, while public attention to others (e.g., autonomous car accidents) has waned over time.
- Current risk management frameworks (e.g., NIST’s AI Risk Management Framework) emphasize pre-deployment evaluation, but post-deployment monitoring needs much greater attention.
- Panelists called for improved information-sharing protocols, robust data governance structures, and clearer assignment of institutional responsibility for prevention and response.
- AI incidents should serve as early warning signals or ‘wake-up calls’ for strengthening policy, governance mechanisms, and defining red lines for AI risk.
From Dependency to Autonomy: The Case for Open Source AI | AI Impact Summit 2026
The session, moderated by Amanda Brock (CEO, OpenUK), featured a distinguished panel discussing 'Building Resilience and Breaking Dependency in Enterprise and Public Sector AI: How can open source support this?'. Panelists included Mishi Choudhary (Software Freedom Law Center), Anastasia Stenko (CEO, Pers), Laura Gilbert (Tony Blair Institute, former UK government AI leader), and Jimmy Wales (Wikipedia founder). The discussion centered on the nuanced distinction between sovereignty, dependency, and resilience within the context of AI and open source. The panelists collectively emphasized the need to reduce dependency on a handful of global tech companies and to champion open, local, and culturally diverse AI models that empower communities and businesses without resorting to isolationism or protectionism. China’s open source policy shift and proactive strategy were highlighted as a model for national resilience. Concerns were raised not only about lost control over technological supply chains and national competitiveness but also about subtler risks—such as cultural homogenization—when too few commercial models dominate. Open source was depicted as a pathway to both technological empowerment and cultural representation, while ensuring legal, ethical, and competitive considerations are maintained. Individual data privacy, local innovation, and policy clarity were highlighted as crucial for building resilient and competitive AI ecosystems going forward.
- The session convened leading voices from law, industry, and government to discuss resilience and breaking dependency in AI via open source approaches.
- There is growing concern globally over dependency on a few dominant tech companies for AI solutions, with risks extending beyond national security to loss of cultural and business competitiveness.
- Multiple panelists rejected a narrow or protectionist notion of 'sovereignty,' emphasizing collaborative, cross-border innovation to avoid technological solitude.
- Open source AI—including local models run on powerful consumer hardware—is advancing and significantly narrows the gap with proprietary models, aiding privacy and self-sufficiency.
- China’s policy shift since 2017—developing a comprehensive national open source ecosystem and its inclusion in the current Five Year Plan—was cited as a successful case study for building technological resilience.
- Cultural representation and diversity in AI models are at risk if supply chains remain concentrated, potentially undermining less globally dominant languages and social contexts.
- For enterprises, dependence on homogenous closed AI platforms can erode competitive advantage by centralizing unique business intelligence in external, global tools.
- True empowerment and competitiveness in AI requires that not just code or model weights, but also the key training data, be open and transparent.
- The group called for precise policy language, greater investment in open alternatives, and a clearer ontology to navigate emerging issues of dependency, resilience, and sovereignty.
Building Trustworthy AI in Digital Public Infrastructure
This high-level session at the India AI Impact Summit 2026, focused on 'Governing Safe and Responsible AI Public Infrastructure,' convened distinguished leaders from Estonia, Switzerland, and Lithuania to discuss best practices, policy frameworks, and lessons learned in the deployment of AI within digital public infrastructures (DPI). Speakers emphasized the paramount importance of AI systems that are transparent, accountable, human-centric, and anchored in international human rights standards. They highlighted the role of multistakeholder collaboration, algorithmic transparency, rigorous oversight, and capacity building, especially in resource-constrained environments. Leaders shared their countries’ efforts in digitalization, AI education, cyber security, and the development of legal frameworks guiding ethical AI use. The recurring theme was that technology must serve people, not the other way around, and that sustained trust in public digital systems is only possible through deliberate governance choices, inclusion, and global cooperation. The Freedom Online Coalition’s rights-respecting Digital Public Infrastructure Principles and international initiatives such as the Vilnius Convention set essential standards for reliable, inclusive, and rights-anchored AI deployment in public systems.
- The session brought together representatives from Estonia, Switzerland, and Lithuania, plus the Freedom Online Coalition, a group of 42 governments advocating digital rights and freedoms.
- Algorithmic transparency, human-centered governance, and regulatory best practices are identified as essential, not optional, for AI-driven digital public infrastructure.
- Estonia emphasized AI literacy in education and the need for AI to complement—not replace—human judgment in public services.
- Switzerland reiterated the necessity of anchoring AI systems in international human rights law, with robust legal bases and safeguards against bias, and called for global cooperation to avoid a fragmented approach to AI governance.
- Lithuania shared its experience with interoperable state registers and e-residency programs, highlighting that 90% of adults use qualified e-signatures for public services.
- Cybersecurity and digital resilience are top priorities in Lithuania, which has established a national AI hub and is an advocate for the Vilnius Convention—the world’s first legally binding treaty on AI and human rights.
- Speakers warned against AI being used for surveillance or discrimination and stressed the need for public authorities to maintain final accountability even when private actors are involved.
- Capacity building, multistakeholder involvement, and continuous public engagement—including civil society and marginalized groups—are positioned as critical for inclusive and trustworthy AI infrastructure.
- The session reinforced the idea that successful digital transformation hinges on active societal oversight, transparent decision-making, and protection of digital rights in practice.
Smart Satellites, Smarter Systems: AI Across the Space Ecosystem | AI Impact Summit 2026
The session 'Beyond Earth: How AI is Powering a New Era of Space Exploration and Space Applications' at the India AI Impact Summit 2026 brought together key leaders from IN-SPACe, ISRO, academia, and private industry to discuss the transformative role of artificial intelligence (AI) in India's space ecosystem. Dr. Vinod Kumar, Director (Promotion) at IN-SPACe, opened the discussion with a keynote detailing the evolution from traditional space technologies to sophisticated AI-enabled systems, such as cognitive space intelligence and real-time edge AI computing onboard satellites. A major announcement was the launch of a new seed fund scheme, offering grants up to ₹1 crore for AI-driven space startups. The panel featured leading experts who discussed breakthroughs in AI-powered robotics (Vyommitra for Gaganyaan), autonomous lunar landing using AI and AR/VR simulations, high-efficiency edge AI for data reduction and onboard analytics, and real-world applications from agriculture to autonomous navigation in space. Key technical challenges such as limited data, resource-constrained environments, and the need for high-accuracy edge models were addressed. Innovative approaches like simulated lunar environments, digital twins, spiking neural networks, and drastic model size reduction techniques were highlighted as pivotal for enhancing mission success and accessibility. Collectively, the panel charted a future where India’s fusion of AI and space tech is poised to not just keep pace with global advancements but set new benchmarks for autonomy, adaptability, and downstream applications both in orbit and on Earth.
- IN-SPACe announced an open AI Seed Fund Scheme providing up to ₹1 crore grants for space-tech startups, having already approved/disbursed grants to 11 companies, with six more planned.
- AI-driven space robotics (the Vyommitra humanoid) will be critical to the success of ISRO's upcoming Gaganyaan human spaceflight program, with advanced cognitive and voice-interactive capabilities and real-time system diagnostics.
- Autonomous lunar and Mars landing utilizes AI and AR/VR model-based training, enabling decision-making and hazard avoidance in uncertain environments, with AI trained using simulated and painted lunar terrains.
- Edge AI technology on satellites is achieving up to 10x inference speed improvement, 5x model size reduction (e.g., from 300MB to 30MB), and delivering insights from space data directly to users in kilobyte-scale transmissions.
- Innovations in space AI include digital twins for transfer learning, on-board spiking neural networks (SNN) on FPGA hardware, and robust edge computing for real-time, autonomous spacecraft control.
- Panel composition represented cross-sector expertise, including ISRO robotics leads, lunar landing researchers, edge computing entrepreneurs, application data scientists, and autonomy specialists, demonstrating India's growing AI-in-space innovation ecosystem.
- The session marked a strategic policy shift, emphasizing AI as foundational to India's Space Vision 2047 and mission autonomy, rather than merely an incremental upgrade.
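The model-size reductions cited above (e.g., 300MB to 30MB) come from compression pipelines. As a minimal, self-contained illustration, the sketch below does symmetric int8 post-training quantization of a toy weight vector; on its own this yields roughly 4x compression versus float32, and the 5x–10x figures from the session would additionally rely on techniques such as pruning or distillation. All names and values here are assumptions for illustration, not any specific onboard stack.

```python
from array import array

def quantize_int8(weights):
    """Symmetric post-training quantization: map float weights onto the
    int8 range [-127, 127] with a single shared scale factor."""
    scale = (max(abs(w) for w in weights) / 127.0) or 1.0
    q = array('b', (max(-127, min(127, round(w / scale))) for w in weights))
    return q, scale

def dequantize(q, scale):
    return [v * scale for v in q]

# Toy float32 weight vector standing in for a satellite model's parameters.
weights = array('f', [0.5, -1.27, 0.02, 1.0, -0.64])
q, scale = quantize_int8(weights)
recovered = dequantize(q, scale)

# float32 storage vs int8 storage: 20 bytes down to 5 bytes (4x).
print(len(weights) * weights.itemsize, len(q) * q.itemsize)  # 20 5
# Reconstruction error is bounded by the quantization step.
print(max(abs(a - b) for a, b in zip(weights, recovered)) <= scale)  # True
```

Smaller integer weights are also what make kilobyte-scale downlinks plausible: the satellite transmits compact inferred insights rather than raw sensor streams.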
Strengthening Primary Care Through Responsible AI Integration | AI Impact Summit 2026
The session at the India AI Impact Summit 2026 focused on the critical challenges and transformative opportunities presented by AI in reshaping health education and workforce readiness. Panelists emphasized that with the rapid advancement of AI, traditional education models in healthcare must evolve to prioritize critical thinking, problem-solving, and interdisciplinary collaboration over rote memorization. The discussion highlighted the widening gap between technological innovation and the preparedness of the workforce, particularly in underserved regions and within generational divides. Speakers stressed the urgent need for systemic overhauls—updating curricula to include AI and digital tools, ensuring equity and inclusion, especially for women and marginalized groups, and building governance mechanisms that coordinate efforts across ministries, countries, and organizations. The session underscored that effective skilling, reskilling, and thoughtful governance are essential to developing a digitally fluent, future-ready health workforce, capable of navigating the ethical, technical, and practical complexities of AI-driven care.
- AI’s rapid evolution is challenging traditional educational models, making critical thinking and problem-solving more essential than ever.
- Existing assessment tools for critical thinking are falling short; education systems must innovate new ways to foster and test these skills.
- There is a significant digital divide within and between countries, especially affecting primary health workers and underserved regions.
- A vast network of around 157 think tanks across 90 countries is being leveraged to address these challenges.
- Equity considerations, especially regarding gender and cultural barriers, are often overlooked in tech-driven initiatives and must be prioritized.
- Medical and paramedical curriculum reforms are urgently needed, incorporating AI, data science, EHRs, and research methodology at undergraduate and postgraduate levels.
- Ongoing training and upskilling must reach not just students but program managers and reviewers to ensure rigorous evaluation of AI-powered research proposals.
- Interdisciplinary education that combines medical, engineering, and data science knowledge is necessary to prepare for AI-augmented healthcare.
- Future health systems will be more inclusive, moving beyond medical schools to involve diverse professionals skilled in quantitative thinking and innovation.
- Strategic cross-ministerial and cross-national governance structures are crucial to harmonize efforts in health, education, and technology policy.
Planet and Progress: AI Solutions for Urban Resilience | AI Impact Summit 2026
The session at the India AI Impact Summit 2026 explored the intersection of artificial intelligence with urban planning, governance, and infrastructure in India. Featuring a distinguished panel with expertise in urban government, AI hardware and software, energy and water management, and supply chain security, the discussion highlighted the centrality of cities to India's future economic growth and the urgent need for intelligent solutions to complex urban challenges. Panelists emphasized the transformative potential of AI in making urban planning more evidence-based, responsive, and equitable, as well as its role in real-time management of public services like waste management, mobility, and simulations for disaster response. They stressed the importance of upskilling the urban planning workforce and developing integrated, institution-wide approaches to data collection and utilization. Examples from India and abroad, such as AI-driven command centers and advanced physical AI for perception and simulation, were cited as key enablers. The session underscored that AI's deeper integration into city systems and institutions is critical to enhancing quality of life and achieving sustainable, resilient urban growth in India.
- India's economy is increasingly urban-centered, with cities contributing 80% of GDP and projected to house over 600 million people by 2036.
- There is a significant deficit of trained urban planners in India: only 5,000–6,000 are available against an estimated need of 12,000, according to NITI Aayog.
- AI can revolutionize urban planning by enabling the collection, sanitization, assimilation, and real-time analysis of urban data, overcoming the current limitations of manual and siloed data usage.
- Examples from Pune and Surat show practical AI deployments in solid waste management and integrated command centers for city operations.
- Emerging 'physical AI'—combining perception via sensors, cameras, and vision-language models—enables unified applications for traffic, safety, and infrastructure management.
- AI-powered simulations can now model real-world impacts of events like floods, enabling better disaster preparedness and urban responsiveness.
- Panelists advocated for upskilling the existing urban workforce and developing frictionless, integrated institutional frameworks to harness AI's full potential in governance.
- Quality of life, equity in public service delivery, and responsive urban infrastructure are positioned as key metrics for AI-enabled urban growth.
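The flood-impact simulations mentioned above can be caricatured in a few lines. The sketch below is a toy spread model (breadth-first search over an invented elevation grid), not any city's actual system; real urban flood modelling couples hydrology, drainage networks, and time dynamics, but the core idea of propagating water across connected low-lying cells is the same.

```python
from collections import deque

def inundated_cells(elevation, sources, water_level):
    """Toy flood-spread model: breadth-first search from breach points,
    flooding any 4-connected cell whose elevation is below water_level."""
    rows, cols = len(elevation), len(elevation[0])
    flooded, queue = set(), deque()
    for r, c in sources:
        if elevation[r][c] < water_level:
            flooded.add((r, c))
            queue.append((r, c))
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and (nr, nc) not in flooded
                    and elevation[nr][nc] < water_level):
                flooded.add((nr, nc))
                queue.append((nr, nc))
    return flooded

# 3x3 elevation grid in metres; the 5 m cells act as embankments.
grid = [[0, 0, 5],
        [0, 5, 0],
        [0, 0, 0]]
flooded = inundated_cells(grid, sources=[(0, 0)], water_level=1.0)
print(len(flooded))  # 7 — everything except the two embankment cells
```

Running such what-if scenarios against city data is what turns static maps into the kind of responsive disaster-preparedness tooling the panel described.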
Full-Stack AI with Google | From Infrastructure to Innovation
The session at the India AI Impact Summit 2026 spotlighted Google's extensive ecosystem for AI development and startup acceleration in India. Speakers detailed the impact of the Google for Startups Accelerator—which has cultivated 1,700+ alumni globally, generated $30 billion in funding, and created 110,000 jobs—while emphasizing hands-on resources like Build with AI events and bootcamps for startups and developers. A case study of the EdTech startup Tentry illustrated the transformative effect of AI, particularly Gemini's multilingual capabilities, in addressing India's local language learning challenges. Representatives from Google DeepMind and Google Cloud further highlighted the rapid evolution of AI tooling, accessible platforms (AI Studio, Gemini), and the democratization of application development. The session underscored India's role as a torchbearer in AI adoption, the blurring of traditional development roles, and the accelerating product cycles driven by integrated, powerful AI stacks—echoing a landscape where foundational development principles now merge with cutting-edge capabilities. Opportunities for aspiring startups to join new accelerator cohorts and to leverage developer workshops were also promoted.
- Google for Startups Accelerator has run for 10+ years, facilitated 20 cohorts, and includes 1,700+ alumni globally.
- Alumni startups have raised over $30 billion post-accelerator and created approximately 110,000 jobs.
- Build with AI events and specialized bootcamps, such as partnerships with Intelligent Agents, have equipped over 200 Indian startups to build AI agents.
- Case study: Tentry, an Android-first EdTech startup, leveraged Google's Gemini AI to provide local-language education, resulting in a 20% increase in user retention.
- Gemini's capabilities in Indian languages allowed Tentry to scale content delivery and introduce AI-powered teaching assistants and interview coaches.
- AI Studio and Gemini platforms are promoted for hands-on developer engagement, model deployment, and experimentation.
- Accelerator program applications for 2026 are open until April 19, targeting AI-first startups.
- Ongoing builder days, developer events, and workshops offered in cities across India (including Delhi and Bangalore) for practical skill development.
- India is recognized as a global leader in adopting and safely deploying AI technologies.
- Product cycles and development roles are rapidly evolving—low/no-code tools enable broader participation, and software can ship in minutes instead of months.
From Simulation to Reality: The Rise of Physical AI | AI Impact Summit 2026
The session presented a comprehensive vision of 'physical AI' as the next major wave in artificial intelligence, surpassing previous advances in perception-based and generative models. The speaker articulated how physical AI—encompassing autonomous vehicles, advanced robotics, and AI-powered factory/warehouse systems—stands poised to revolutionize the untouched physical world, unlocking multi-trillion-dollar industries such as manufacturing, logistics, healthcare, and retail. Key differentiators and challenges for physical AI were outlined, such as handling vast multimodal sensor data, bridging the data digitization gap, ensuring rigorous real-world evaluation, and enabling edge computing for low-latency, secure operations. India was positioned as uniquely advantaged due to its talent pool and burgeoning manufacturing ecosystem, creating a 'virtuous flywheel' of opportunity. The talk further detailed Nvidia’s approach to overcoming obstacles, including its 'data pyramid' strategy (leveraging human/web video, simulation, and real-world fine-tuning), and highlighted Nvidia's commitment to building a three-pronged computing platform: centralized datacenters, simulation computers for synthetic data, and powerful edge devices like Jetson. Crucially, Nvidia is making its software stack for physical AI fully open to empower global and localized innovation, with frameworks, tools, and open models providing a foundation for developers to specialize solutions for their own contexts.
- Physical AI—AI embodied in the real world through robots, autonomous vehicles, and intelligent factories—is the imminent next wave, with potential impact far exceeding that of digital, generative, or agentic AI.
- Vast economic opportunity: Multi-trillion-dollar industries (manufacturing, logistics, healthcare, retail) have seen minimal AI transformation compared to software.
- Physical AI relies on multimodal sensor data (vision, touch, spatial, temporal, speech, etc.), creating new data challenges including continuous streams and a lack of digitized operational data.
- India’s advantage: Large IT and software talent pool and a growing manufacturing base position the country as a leader in adopting and innovating in physical AI/robotics.
- Nvidia’s strategy includes a 'data pyramid' comprising web/human video, simulated synthetic data, and final real-world calibration for robust physical AI model training.
- Transition highlighted from specialist robots (one task) to generalist and 'generalist-specialist' robots, enabled by pre-trained, multipurpose models and hardware.
- Nvidia’s computing infrastructure for physical AI covers three layers: (1) traditional datacenters, (2) simulation computers for synthetic data creation, and (3) edge computers (like Jetson, offering up to 2,000 TFLOPs on-device) to deploy AI brains locally.
- All Nvidia's physical AI enabling software (models, tools, frameworks) is being made fully open, to accelerate global and domain-specific robotics/AI development.
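The 'data pyramid' above is ultimately a sampling policy over data sources. The sketch below illustrates that idea with a weighted tier sampler; the tier names and the 80/15/5 mix are assumptions for illustration, not Nvidia's published figures.

```python
import random

# Illustrative 'data pyramid' mix for training physical-AI models, loosely
# after the strategy described in the talk: abundant human/web video at the
# base, synthetic simulation data in the middle, scarce real-robot data at
# the top. Tier names and weights are assumed, not Nvidia's numbers.
PYRAMID = {"web_video": 0.80, "simulation": 0.15, "real_robot": 0.05}

def sample_tiers(n, weights, seed=None):
    """Draw the source tier for each of n training examples."""
    rng = random.Random(seed)
    tiers = list(weights)
    return rng.choices(tiers, weights=[weights[t] for t in tiers], k=n)

batch = sample_tiers(10_000, PYRAMID, seed=0)
shares = {t: batch.count(t) / len(batch) for t in PYRAMID}
# Empirical shares track the configured mix to within sampling noise.
print(all(abs(shares[t] - w) < 0.03 for t, w in PYRAMID.items()))
```

In a real pipeline the scarce real-world tier would typically be reserved for fine-tuning and calibration rather than mixed uniformly, but the weighting mechanism is the same.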
India’s AI & IP Strategy: Driving Innovation and National Advantage
The session opened with Dr. S. Chakravarti's keynote, emphasizing the critical intersection of artificial intelligence (AI) and intellectual property rights (IPR) in India's ambition to become a global leader in AI. Dr. Chakravarti articulated both the transformative benefits and challenges of AI in sectors like healthcare and IPR processing, while cautioning about potential downsides such as cybercrime, surveillance, and copyright disputes. He advocated for a balanced regulatory approach—where existing laws are adapted but not rushed, innovation is protected without compromising public interest, and AI is seen as an augmentation of human intelligence. Panelists highlighted the evolving complexity of AI-related IP ownership, especially for open-source and startup innovation in India, stressing that transparent frameworks and balanced policies are necessary to accelerate innovation while ensuring robust IP protections. The discussion acknowledged India's fast growth as a startup hub, the need for democratization of AI, and the opportunity to define leadership through pragmatic, step-wise regulation.
- Dr. Chakravarti underscored AI as a 'multiplier effect' technology that should augment—not replace—human intelligence.
- India's AI mission includes safety pillars such as deepfake detection, bias identification, and machine unlearning.
- He outlined the 5 V's of big data in the context of IPR: Velocity, Volume, Value, Variety, and Veracity (Trustworthiness).
- AI can greatly improve efficiency and accuracy in IPR processing, including patent searches and data analysis.
- Major downsides highlighted include AI-driven cybercrime, surveillance risks, and generative AI's infringement on traditional copyrights.
- Dr. Chakravarti called for a balanced approach: existing data protection and IT laws address some, but not all, AI risks; India should 'hasten slowly' with new AI regulation.
- Public interest should override restrictive international agreements (e.g., TRIPS), and patient welfare should outweigh patent interests in health tech.
- Panelists highlighted the importance of open source for democratizing innovation in India, but stressed the need for clear IP frameworks to protect downstream creation.
- Prime Minister Modi was cited acknowledging India's position as the world's third-largest source of startups, underlining the strategic importance of open source for scaling innovation.
- Ownership and ROI of AI-generated content and models were recognized as emerging, complex challenges in India's evolving AI ecosystem.
Responsible AI at Scale: Building Trust, Governance & Cyber Resilience
The session at the India AI Impact Summit 2026, moderated by Major Vinit Kumar (founder and global president of Cyberpace), convened a distinguished panel of global cybersecurity leaders, public policy experts, and academics to address the urgent need for Responsible AI at scale. The discussion highlighted India's unique position as a democracy and digital powerhouse, capable of leading not just technological innovation but also conversations on governance, integrity, and cyber resilience. Key announcements included the development of region-specific frameworks such as the FIST framework to ensure transparency, integrity, safety, and trust at the heart of AI systems and the emphasis on collaboration between government, industry, academia, and civil society to operationalize responsible AI. With India housing 40% of the global AI talent and women in STEM, panelists underscored the importance of leveraging national strengths to create inclusive, equitable governance models that address new threats like deepfakes and AI-generated disinformation, while also maintaining trust and safeguarding critical infrastructure. The panel stressed moving from aspiration to action by focusing on how to implement responsible AI in practice for a secure and prosperous technological future.
- Panel theme: Responsible AI at scale, focused on governance, integrity, and cyber readiness with an executional—not theoretical—emphasis.
- India identified as a digital powerhouse and voice for the Global South, aiming to influence the global narrative on trusted AI.
- Cyberpace, a global nonprofit based in India, collaborates with international organizations and hosted the summit.
- Eminent panelists included leadership from EC Council, Cloudflare, InMobi, and academic representatives from Russia, with expert participation from the Commonwealth Secretariat.
- Key frameworks discussed included the FIST framework (integrity, safety, and trust) developed jointly by Cyberpace, USI, Mastercard, and Tata Sons.
- India is home to 40% of the world's AI talent, including women in STEM, underscoring the nation’s capacity to lead in implementing responsible AI practices.
- Critical need for governance models tailored to India and the Global South—eschewing mere adoption of Western models.
- Practical strategies advocated: watermarking to combat deepfakes, social media accountability, and expanding rural and inclusive access.
- Emphasis on collaboration across government, industry, academia, and civil society for deploying responsible AI frameworks.
- Announcement of the Cyberpace Global Index, the first responsible AI and ethical AI index from the Global South, set to go live by the end of 2026.
- Re-affirmed that operationalizing responsible AI is essential to national resilience, democratic trust, and economic opportunity.
AI for MET: Catalyzing the Next Era of Intelligent Manufacturing | AI Impact Summit 2026
The session at the India AI Impact Summit 2026 marked a significant step in India's industrial transformation through the formal launch of a concept white paper and the announcement of a national Manufacturing Engineering and Technology (MET) platform aimed at embedding AI across the manufacturing sector. Backed by the Ministry of Electronics and Information Technology (MeitY), and led by the new MET Institute (NAME), the initiative seeks to foster a close partnership between industry, academia, and government, targeting talent development, inclusive AI adoption especially among MSMEs and rural enterprises, and the advancement of precision manufacturing. The MET platform and its supporting initiatives—as articulated by key leaders including Dr. Ibrahim Hafiz Rahman, the Honorable Minister, and MIT's Dr. Eric Grimson—will enable scalable, secure, and sustainable AI integration in production and supply chains, establish shared testbeds, promote data standardization, and create robust talent pipelines. A key component is the formation of an expert executive council to guide the rapid transition from strategy to execution within the next six months, with the goal of advancing India’s GDP, driving employment, and positioning India for global leadership in smart manufacturing.
- A new national Manufacturing Engineering and Technology (MET) platform was publicly launched to catalyze AI-driven transformation of India's manufacturing sector.
- The initiative is convened by the new MET Institute (NAME), with strong backing from the Ministry of Electronics and Information Technology (MeitY) and a strategic partnership with the Massachusetts Institute of Technology (MIT).
- A concept white paper on AI for manufacturing was released, outlining the roadmap for industry, academia, and governmental collaboration.
- An executive council will be established—comprising policymakers, industry leaders, and academics—to refine and implement the MET white paper within 3–6 months.
- Key focus areas include talent development, workforce transformation (particularly among MSMEs and in rural areas), deployment of AI at scale, and inclusivity in innovation.
- The MET platform is aligned with the Viksit Bharat 2047 vision, aiming for manufacturing to exceed 25% of India’s GDP.
- Plans include the creation of shared testbeds, development of data governance frameworks, and tools for predictive maintenance and supply chain optimization.
- MIT and NAME reaffirmed their commitment to supporting India’s journey to global manufacturing leadership, with a focus on ethical and secure AI deployment.
- India has seen tangible results: several global firms in aerospace and precision manufacturing are already establishing supply chains domestically due to India's growing talent base.
- There is an explicit emphasis on rapid, inclusive scaling of AI technology—moving from pilots to wide-scale adoption, especially for MSMEs.
Towards a Multilateral Agreement on Enforcing Red Lines
The panel session at the India AI Impact Summit 2026, hosted by the Ada Lovelace Institute, brought together international experts to debate how best to define and enforce 'red lines'—the boundaries AI systems must not cross to protect individuals and societies amidst rapidly evolving technologies and global power dynamics. Key approaches discussed included risk assessment frameworks drawn from both traditional corporate models and absolute safety standards, as well as regulatory and legislative initiatives currently being advanced in Brazil and the EU. The panel emphasized the need for international cooperation on AI governance, the importance of sectoral and context-specific applications of AI 'red lines,' and the challenges of keeping regulations up to date with technological innovation. Panelists argued for a shift from mere harm prevention to ensuring the flourishing of rights and dignity, integrating labor, ecological, and human rights—especially from Global South perspectives—into governance regimes. Issues of digital sovereignty, the gaps in multilateral engagement, and concerns about the limits of self-regulation were highlighted, alongside calls for inclusive, adaptable, and enforceable global AI norms.
- The Ada Lovelace Institute centers diverse public voices and empirical evidence in shaping AI governance.
- Two models for defining AI 'red lines' were contrasted: corporate risk appetite/assessment and an absolute, safety-first ('vaccine model') approach for extreme risks.
- Brazil is advancing a draft AI law using sectoral adaptation, classifying AI system risks, and defining specific unacceptable applications such as mass manipulation, predictive criminal profiling, autonomous weapons, and biometric surveillance.
- The Brazilian framework takes a context-focused approach, targeting AI application domains rather than technical mechanisms.
- The EU AI Act's Article 5 sets out 'red lines' but has struggled to keep pace with emerging harms such as non-consensual explicit image generation.
- Calls were made for baseline human rights assessments and post-market monitoring of all AI systems—not just high-risk cases.
- The need to factor labor, ecological impact, and knowledge commons into global AI governance, especially reflecting constitutional perspectives from the Global South.
- Recognition that multilateral AI governance is nascent and that the self-regulation pledge model (e.g., the Seoul Commitment) is insufficient, as real harms have occurred despite voluntary industry commitments.
- Kenya and the African continent were highlighted as major AI users despite minimal participation in development or global norm-setting, underlining digital inequity and advocating for more inclusive international dialogues.
AI Transformation in Fintech: Smarter, Faster, More Inclusive
The panel discussion at the India AI Impact Summit 2026, hosted by IIT Bombay, brought together leaders from academia, fintech, insurance, and IT services to explore the practical transformation AI brings to India's financial sector. The session emphasized that AI's impact extends beyond trending generative tools like ChatGPT, with a decade-long evolution from rule-based automation to advanced intelligent decision-making. Panelists shared concrete use cases demonstrating productivity gains, process re-engineering, and hyper-personalized customer solutions across NBFC and insurance contexts, citing examples such as automated document verification, AI-powered loan processing, and rapid product development cycles. Prevalent themes included the criticality of trust, explainability, and model verification in AI systems, as well as the real-world hurdles of user adoption and change management. The insights reflect an industry rapidly scaling intelligent automation, underpinned by close academia-industry collaboration and a focus on business value rather than technology fads.
- AI's role in fintech has evolved from basic automation to advanced intelligent decision-making.
- IIT Bombay's Technology Innovation Hub focuses on translating AI research from lab to market, touching industry and society.
- Hersh Kumar (Punalavala Finance) reported 15–40% productivity improvements in credit assessment by automating document verification and CAM writing using AI.
- Punalavala Finance's AI adoption now spans seven product lines, with significant downstream impact.
- Anjini Bhardwaj (HDFC Ergo) highlighted that 84% of customer servicing is fully digital, and AI solutions have achieved 20–30% process efficiency gains across policy issuance, underwriting, servicing, and claims.
- HDFC Ergo's AI-enabled policy system reduced product launch timelines from months to weeks.
- Generative AI (GenAI) and traditional AI are increasingly combined for document processing and customer-centric solutions.
- Mukund Kanan (Emphasis.ai) stressed the dual challenges of bringing AI into production: matching technology to real business needs and managing user adoption.
- Emphasis.ai's intelligent document processing now supports major US home loan processors, showcasing India's AI capabilities at a global scale.
- Trustworthiness and accuracy are key, with IIT Bombay leading efforts in AI model verification—moving beyond accuracy to ensure models behave reliably.
- The panel cautioned against AI adoption driven solely by hype (FOMO), urging focus on genuine business problems for scalable success.
- User and stakeholder change management is often underestimated and remains a barrier to large-scale AI deployment.
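The panel's point about verification "moving beyond accuracy to ensure models behave reliably" can be made concrete with a property check: instead of only measuring accuracy on a test set, verify that the model obeys a domain rule across inputs. This is a minimal sketch under stated assumptions; the toy scoring function and the monotonicity property are hypothetical illustrations, not any system discussed on the panel.

```python
# Sketch: verifying a model *property* (reliable behavior) rather than
# only its accuracy. The toy credit scorer and the monotonicity rule
# below are hypothetical examples.

def score(income: float, existing_debt: float) -> float:
    """Toy credit score in [0, 1]: higher income helps, debt hurts."""
    return max(0.0, min(1.0, 0.5 + income / 200_000 - existing_debt / 100_000))

def check_income_monotonicity(incomes, debt: float) -> bool:
    """Property: with debt held fixed, raising income must never lower the score."""
    scores = [score(i, debt) for i in sorted(incomes)]
    return all(a <= b for a, b in zip(scores, scores[1:]))

if __name__ == "__main__":
    ok = check_income_monotonicity([20_000, 50_000, 90_000, 150_000], debt=30_000)
    print("monotonicity holds:", ok)
```

For a real lending model the same idea applies with the trained model in place of `score`, run over many sampled profiles; a violation signals unreliable behavior that aggregate accuracy would not reveal.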
A Billion Voices: Scaling Language AI from Data Centres to Local Dialects
The session at the India AI Impact Summit 2026 highlighted an innovative partnership between Intel and Bhashini to integrate advanced language AI capabilities—such as speech recognition, multilingual translation, and generative AI—directly into on-device AI-powered PCs (AIPCs). By embedding Bhashini’s technologies at the edge, these systems provide low-latency, privacy-preserving, and accessible digital educational solutions even in areas with limited connectivity, advancing digital inclusion and supporting India’s vision for a multilingual digital public infrastructure (DPI). Intel leaders, joined by officials from the government and Bhashini, detailed the need to make AI both affordable and accessible nationwide, emphasizing the importance of an open ecosystem, scalable computing, collaboration with local startups and institutions, and addressing linguistic and connectivity barriers for learners and innovators. Real-life use cases showcased how these AIPCs are already empowering students and teachers across rural and urban networks, enabling vernacular content creation, innovation labs, and removing connectivity as a barrier to AI adoption. The collaborative AI edge-DPI model was positioned as a blueprint for inclusive, at-scale digital transformation across education and additional sectors such as healthcare, agriculture, and public services.
- Launch of integrated Bhashini language AI capabilities (speech recognition, translation, text generation) embedded directly in Intel’s AIPCs—functioning offline and supporting 22 Indian languages.
- AIPCs deliver low-latency, privacy-preserving, and accessible educational workflows, benefiting areas with poor or unreliable connectivity.
- Intel and Bhashini's partnership reinforces India's multilingual digital public infrastructure strategy and aims to democratize advanced AI access beyond urban, high-resource settings.
- AI Labs in schools and colleges across India use AIPCs to provide local innovators (students) with hands-on AI tools for learning, experimentation, and solution development—including in remote areas.
- Key challenges addressed include lack of connectivity and limited access to AI toolsets for rural/underprivileged students.
- Intel emphasized four tenets for mass AI adoption: heterogeneous scalable compute, secure/network-independent edge execution, open AI ecosystem/software, and collaborative partnerships.
- Highlight of real-time vernacular translation/transcription for educational content and small enterprises without requiring internet access.
- Commitment to making AI affordable and adaptable to local needs, with potential scale for hundreds of millions or even a billion Indian citizens.
- The collaboration serves as a scalable, replicable model for AI-powered inclusion in sectors like healthcare, agriculture, public infrastructure, and cultural preservation.
- Presence of senior leadership from Intel, Bhashini, and Indian government (including Rajya Sabha Joint Secretary) underscores strong industry-government collaboration.
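The "secure/network-independent edge execution" tenet above reduces to an edge-first inference pattern: keep a local on-device model path that needs no network, and treat any remote service as optional. A minimal sketch, with hypothetical stub functions standing in for the on-device and cloud model calls (neither is a real Bhashini or Intel API):

```python
# Sketch of edge-first inference: the on-device model is the default and
# always works; the cloud path is attempted only when connectivity is
# reported, and failure degrades gracefully back to the edge.
# All function names here are hypothetical stand-ins.

def translate_on_device(text: str, target_lang: str) -> str:
    # Stand-in for an on-device model call (no network required).
    return f"[{target_lang} on-device] {text}"

def translate_via_cloud(text: str, target_lang: str) -> str:
    # Stand-in for a remote API call (may be unreachable in rural areas).
    raise ConnectionError("no network")

def translate(text: str, target_lang: str, online: bool = False) -> str:
    """Prefer the cloud model when online; never fail when offline."""
    if online:
        try:
            return translate_via_cloud(text, target_lang)
        except ConnectionError:
            pass  # degrade gracefully to the local model
    return translate_on_device(text, target_lang)

if __name__ == "__main__":
    print(translate("Good morning", "hi"))                # offline path
    print(translate("Good morning", "hi", online=True))   # cloud fails, falls back
```

The design choice is that connectivity is an optimization, not a dependency, which is exactly what removes network access as a barrier to adoption.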
Unlocking EU–India Opportunities for the Twin Transition
The session at the India AI Impact Summit 2026 highlighted the growing momentum for EU-India collaboration on the twin transition of advancing artificial intelligence (AI) and sustainability. Key leaders from industry, policy, and technology described a historic period marked by the recent conclusion of the EU-India Free Trade Agreement, ambitious joint digital and green policy initiatives, and new frameworks to enable seamless, interoperable digital ecosystems. The roundtable showcased tangible progress, such as mutual recognition of digital signatures, piloting digital wallet interoperability, and significant industry use cases—especially in aerospace and enterprise software—that leverage AI to both increase efficiency and drive sustainability goals. Speakers from Airbus, SAP, and representatives of the European Commission and the Federation of European Businesses in India emphasized the importance of embedding ethical, responsible AI, developing shared regulatory standards, and addressing the resource-intensive nature of AI to create green, resilient, and inclusive technology landscapes for both regions.
- EU-India collaboration has reached new heights, underscored by the recent successful negotiation of the EU-India Free Trade Agreement.
- A major milestone was the signing of a mutual recognition agreement for e-signatures between India and Europe, setting a foundation for greater digital interoperability.
- The European 'business wallet' project and India's Aadhaar/DigiLocker initiatives are being tested and moving towards full interoperability to facilitate both citizen and business mobility.
- Industry examples: Airbus uses AI for predictive maintenance (saving the airline industry $200M annually) and logistics optimization, while SAP highlighted its employment of 15,000+ engineers in India and is driving ‘AI-first’ and ‘green ledger’ approaches for measurable sustainability.
- The EU and India maintain the only active EU Trade and Technology Council, aiming to simplify cross-border business with a digital-first agenda.
- Joint supercomputing and AI research agreements have been established, with a focus on healthcare, aerospace, materials science, and pharmaceutical cooperation.
- AI’s surging energy demands are a shared concern—SAP cited estimates that US AI data centers could soon consume as much electricity as the entire African continent, prompting urgency for greener AI solutions.
- Calls were made for harmonized global regulatory frameworks and standards for AI, given the patchwork of regulations that currently exists.
- The session signaled strong political and technical will from both sides to create a seamless knowledge, business, and technological partnership anchored in values of democracy, inclusion, and sustainability.
AI for Learning Outcomes: Equity, Safety, and System Transformation | AI Impact Summit 2026
The opening session of the India AI Impact Summit 2026 focused on the transformative role of artificial intelligence in education, highlighting both its promise and potential risks. The President of Estonia introduced Estonia’s ambitious 'AI Leap,' a nationwide initiative to rapidly integrate AI skills and tools into the national education system, aiming for widespread digital literacy and practical competence. The approach encompasses teacher training, lifelong learning, research on AI’s educational impact, and special attention to ethical transparency and linguistic diversity through support for small language models. UNICEF’s Global Director of Education echoed these themes, emphasizing the necessity for system-level interventions over fragmented pilots, prioritizing evidence-based outcomes over performance claims, and building equitable, interoperable digital ecosystems. Both parties called for global collaboration and a steadfast focus on equity, teacher empowerment, and protecting the most marginalized students. Estonia’s successful country-wide rollout—achieved in under a year—was attributed to nationwide engagement, holistic training, public trust, and collaboration with the private sector. The session laid the groundwork for exploring scalable, inclusive, and effective AI-enabled educational reforms with concrete, actionable frameworks.
- Estonia launched the 'AI Leap' initiative to embed practical AI skills and tools across its education sector, aiming for the majority of citizens to gain basic AI literacy and at least half to reach intermediate proficiency.
- AI Leap includes systematic investment in teacher and school leader training, development of a digital academy, and nationwide rollout of AI-enabled learning materials.
- Estonia is committed to ethical, transparent use of AI, with special attention to learner awareness and data rights.
- Emphasis on protecting and strengthening small languages and cultural diversity by advocating for open language data and local AI models.
- A major research program accompanies AI Leap to evaluate its impact on learning outcomes, equity, wellbeing, teacher workload, and classroom quality.
- UNICEF highlighted a global education crisis: over 70% of 10-year-olds in low and middle-income countries cannot read or comprehend a simple text.
- UNICEF calls for: (1) whole-system scaling up of AI initiatives (no more isolated pilots), (2) evidence-based focus on learning outcomes, especially for marginalized learners, and (3) interoperable, sustainable digital ecosystems rather than fragmented tools.
- Teacher empowerment recognized as central—AI should support teachers as multipliers of equity and learning.
- Both Estonia and UNICEF advocate for a global framework and movement—combining political leadership, private sector partnership, and public accountability—to ensure AI transforms education equitably and inclusively.
- Estonia’s country-wide AI rollout was successful due to engagement at all system levels, teacher training, strong public trust established through previous digital reforms, and active private sector support.
Building Strong AI & Data Partnerships for Economic and Social Impact
The session opened with a focus on advancing data and AI collaboratives for economic growth and social good, emphasizing the critical need for trust, governance, and inclusive participation. Ambassador Harry Fur highlighted European and Dutch efforts in forging public-private partnerships, trusted data exchanges, and inclusive AI ecosystems, citing GPTNL as a showcase for responsible, benefit-sharing innovation. He underscored the importance of interoperability, standards, and education to ensure sustainable, scalable AI impact. Frederick Warner of the ITU reinforced the global imperative to bridge skills, data access, infrastructure, and governance gaps, drawing on multi-agency UN collaboration and successful global south diffusion models such as India’s digital public infrastructure. Warner advocated for inclusive, standards-driven scale-up of AI for Good use cases, illustrating the high stakes and ethical debates around AI’s transformative power in health, education, and disaster response, especially for vulnerable and under-resourced populations.
- Ambassador Harry Fur emphasized trust as foundational for data sharing and collaborations, calling for clear governance frameworks and legal certainty (e.g., EU Data Act, Data Governance Act).
- The Dutch model features collaborative public-private and civil partnerships, illustrated by GPTNL, where data holders retain control, receive revenue stakes, and ensure responsible data usage.
- Major investments are being directed to AI/data infrastructure (Euro HPC, AI Factory) and human capital through expanded education and reskilling.
- International standards and interoperability were identified as critical enablers for mutual benefit and the value creation of data and AI collaboratives.
- Frederick Warner highlighted the ITU's 'AI for Good' initiatives, involving over 50 UN sister agencies and broad stakeholder inclusion beyond the tech sector.
- New global AI use cases include voice-based blood-glucose detection by Estonian startups; such pilot projects must address bias, accessibility, and safety for global adoption.
- A United Nations University report identified five global AI scaling pathways: access to good/sovereign data, infrastructure/connectivity, skills, governance/standards, and fostering AI ecosystems blending startups, governments, and academia.
- India was recognized as a diffusion and adoption leader, with its digital infrastructure serving as a model for scaling AI responsibly across the global south.
- Both speakers stressed moving beyond pilots to systemic, measurable impact, with Warner inviting participation in the AI for Good Global Summit (Geneva, July 7–10).
Building Safe and Trusted AI: Ethics, Governance & Accountability
The session at the India AI Impact Summit 2026 focused on the intertwined issues of ethics and governance in the context of safe and trustworthy AI deployment. Opening remarks by the session organizers and the Director General underscored the pervasive nature of AI across sectors—education, research, business—and highlighted concerns around ethical AI use, data privacy, inclusiveness, and the necessity for robust governance frameworks. Professor Dr. Sachin Sharma emphasized the need for AI governance to include voices from the Global South to avoid exclusionary technologies, drawing parallels with UPI rather than nuclear tech in terms of accessibility. Dr. Gita, Director of the National Center for Science Communication and Policy Research, addressed practical and ethical challenges faced by authors, editors, and publishers when integrating AI into scientific writing and publishing. She outlined risks such as data privacy violations, underscored the importance of transparency and accountability in AI use, and called for human oversight, especially in peer review processes. She stressed responsible AI use, a clear understanding of publisher guidelines, and human-centricity as central tenets. Dr. Morti, a government and academic leader, outlined India's regulatory strategies—emphasizing fairness, accountability, transparency, safety, privacy, and the use of regulatory sandboxes for safe innovation. He advocated for risk-tiered governance frameworks with mandatory human oversight, particularly in high-risk applications. The session collectively advocated for inclusive, responsible, and transparent AI deployment with strong ethical anchors and adaptable governance systems.
- Ethics and governance must be addressed together for safe and trustworthy AI.
- AI usage is widespread across research, business, and education—but often unacknowledged.
- Data privacy breaches are common when confidential manuscripts are processed by AI tools without awareness.
- Dr. Gita emphasized the need for awareness of AI’s data training risks, data privacy, and publisher-specific AI usage guidelines.
- Plagiarism and 'hallucinations' (fabricated references/content) remain key risks in AI-assisted scientific publishing.
- Transparency and accountability for AI use remain the responsibility of humans, not the technology.
- AI deployment risks include built-in biases due to incomplete or non-representative data sets.
- Major publishers differ in AI usage policies, but consensus exists that AI can help with editing—full drafting is not allowed without disclosure.
- Human oversight is required, especially in editing, review, and decision-making, to ensure ethical standards and data integrity.
- India’s government is focusing on fairness, transparency, accountability, safety, and privacy for AI governance.
- Proposed frameworks tailor governance based on system risk (high, medium, low), with stricter control and human intervention required as risk increases.
- The session called for inclusive AI frameworks, referencing India's MOU network with 157 think tanks across 90 countries.
- The goal is to avoid elitist technological adoption and ensure AI benefits all social strata from grassroots to urban elite.
- Participants were reminded that AI should assist, not replace humans, in decision-making and content creation.
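The risk-tiered framework described above (stricter controls and mandatory human intervention as risk rises) can be sketched as a simple tier-to-controls mapping. The tiers and control names here are illustrative assumptions, not any official Indian framework:

```python
# Sketch of risk-tiered AI governance: each tier maps to the minimum
# controls a deployment must satisfy, with human oversight becoming
# mandatory as risk increases. Tier names and controls are hypothetical.

from enum import Enum

class Risk(Enum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3

# Hypothetical mapping from risk tier to required controls.
CONTROLS = {
    Risk.LOW:    {"human_review": False, "audit_log": True},
    Risk.MEDIUM: {"human_review": True,  "audit_log": True},
    Risk.HIGH:   {"human_review": True,  "audit_log": True, "sandbox_trial": True},
}

def required_controls(risk: Risk) -> dict:
    """Return the controls a deployment at this tier must satisfy."""
    return CONTROLS[risk]

if __name__ == "__main__":
    for tier in Risk:
        print(tier.name, "->", required_controls(tier))
```

A regulatory sandbox, in this framing, is simply an extra control attached to the highest tier before wide deployment is permitted.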
How Nations Secure AI for Millions | The Trusted AI Framework | AI Impact Summit 2026
The panel at the India AI Impact Summit 2026 brought together leaders from legal tech, global technology providers, automotive digital transformation, and policy governance to discuss the deployment of safe, trusted AI at national scale. The initial discussion emphasized the ongoing evolution of AI governance—from early safety concerns to present adoption-driven regulatory approaches and sector-specific considerations. The panel acknowledged India's unique strengths, notably its world-class talent and success in multilingual foundational model development, but highlighted a significant infrastructure gap as a constraint. Panelists agreed that future policy and governance must be agile and informed by rapid technological progress, with the use of AI itself to support smarter, faster policy-making. Maharashtra’s government collaboration with Nvidia, for reference architectures and deployment across healthcare, banking, and public services, showcases India’s ambition and progress. However, building the necessary AI infrastructure remains the nation's primary challenge to achieving population-scale impact and broad adoption.
- India is prioritizing safe, trusted AI deployment at population scale, balancing innovation with regulatory safeguards.
- Panelists represented Nvidia, Maruti Suzuki, Carnegie Mellon University and IIIT Hyderabad, and policy experts—showcasing a mix of technical, industrial, academic, and governance perspectives.
- Recent shifts in global governance approaches noted: UK’s model (safety institutes monitoring tech, then regulating) contrasted with the EU’s more immediate heavy regulatory stance.
- Accelerated AI adoption has shifted focus from hypothetical risks to real-world sectoral deployment and regulation (e.g., AI in healthcare, legal, and government services).
- Gap between tech innovation (private sector) and state governance is widening; panel suggests use of generative AI and LLMs within government to speed up policy analysis and implementation.
- Nvidia's perspective: India boasts leading AI talent and has successfully trained foundational models in 22 languages, but faces a critical bottleneck in compute infrastructure for large-scale AI deployment.
- India’s government is actively collaborating with industry to develop frameworks and reference architectures for safe, scalable AI.
- Major contrast identified between the Global South (notably India’s eagerness and talent but infrastructure shortage) and Global North (more mature infrastructure but sometimes restrictive regulation).
- Panel anticipates deeper, more agile regulation as AI plays an increasing role in national economic and governance spheres.
How AI Is Transforming Military Sustainment and Readiness
The session at the India AI Impact Summit 2026, led by senior officials from the Indian Army's Corps of Electronics and Mechanical Engineering (EME), focused on the transformative impact of artificial intelligence (AI) and niche technologies on military operations, logistics, and equipment management. Emphasizing the critical need to modernize and 'smartize' legacy weapon systems, the session highlighted initiatives to move from analog, manpower-intensive systems toward AI-enabled, data-driven platforms. These efforts aim to improve predictive maintenance, operational readiness, decision support, and resource deployment through advancements such as sensor integration, data analytics, robotics, drones, and simulation environments. The speakers called on industry and academia for collaboration, particularly in addressing data scarcity, war-time applicability, cybersecurity, and operational reliability. Key objectives included building a unified operational-logistics picture, boosting engineering support velocity via sensor deployment and predictive insights, and indigenizing drone and advanced tech development. The session articulated the vision that future military advantage will rest not on raw numbers but on intelligent enhancement of existing capabilities, leveraging AI and domestic innovation.
- Indian Army leadership outlined a pressing need to upgrade legacy weapon systems (e.g., L70 guns, IFG 1974) using AI and sensor-based 'smartization,' rather than full replacement, for cost-effectiveness and operational continuity.
- AI-driven predictive maintenance is being developed to optimize logistics, enable real-time equipment health monitoring, and maximize combat force readiness while addressing supply chain constraints.
- A three-pronged collaboration request to industry and academia: (1) creating a common operational-logistic picture via multi-source data fusion, (2) deploying robust sensor networks for predictive maintenance and decision support, and (3) advancing indigenous drone and robotics solutions with AI-enabled autonomy and minimal external dependency.
- Key challenges identified: scarcity of high-quality labeled military data, need for AI explainability and human-in-the-loop for compliance with laws of armed conflict, and requirements for strong cybersecurity defenses throughout integration.
- Specific AI applications showcased included autonomous barrel inspection using crawler robots, improved residual useful life (RUL) prediction for artillery, and transitioning manual artillery processes to automated, sensor-integrated platforms.
- Operational edge to come from integrating additive manufacturing, robotics/exoskeletons, and AI-augmented decision support platforms across all domains, including growing domains of cyber and cognitive warfare.
- Strong emphasis on building indigenous institutional AI capability through training, infrastructure upgrades, and ensuring all modernization is Indian-developed, empowering sustained domestic military innovation.
- The session’s structure included keynote and expert perspectives spanning military, academia (IISc), and industry (Tata Elxsi, Deloitte), setting up a holistic 360-degree approach to technological transformation.
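The residual-useful-life (RUL) idea mentioned above can be sketched in a few lines. This is a minimal illustration only, assuming a linear wear model fitted by least squares; the sensor readings, wear threshold, and units are invented for the example and are not real artillery figures or the Army's actual method.

```python
# Hypothetical RUL sketch: fit wear = a*hours + b to periodic sensor
# readings, then extrapolate to a failure threshold. Illustrative only.

def estimate_rul(hours, wear, failure_threshold):
    """Fit a least-squares line to (hours, wear) and return the
    operating hours remaining until wear reaches failure_threshold."""
    n = len(hours)
    mean_h = sum(hours) / n
    mean_w = sum(wear) / n
    cov = sum((h - mean_h) * (w - mean_w) for h, w in zip(hours, wear))
    var = sum((h - mean_h) ** 2 for h in hours)
    slope = cov / var                      # wear added per operating hour
    intercept = mean_w - slope * mean_h
    hours_at_failure = (failure_threshold - intercept) / slope
    return hours_at_failure - hours[-1]    # hours of life remaining

# Invented barrel-wear readings: (operating hours, wear in mm)
hours = [0, 100, 200, 300, 400]
wear = [0.0, 0.5, 1.0, 1.5, 2.0]
print(round(estimate_rul(hours, wear, failure_threshold=5.0)))  # → 600
```

In practice a fielded system would use far richer degradation models and sensor fusion; the point here is only the shape of the prediction step: estimate a wear rate, project it forward, and schedule maintenance before the threshold is reached.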
AI for Defence: Insights on Strategy and National Security
The keynote session at the India AI Impact Summit 2026, primarily addressed by a Deputy Chief of Army Staff, focused on the integration and ethical governance of AI in India's defense ecosystem. The address underscored the accelerating impact of AI on warfare, battlefield tactics, and military decision-making, while emphasizing the irreplaceable role of human judgment and moral responsibility in the use of advanced technologies. Amidst excitement about AI’s potential, the discussion highlighted recent bilateral collaborations (including outcomes from discussions with France and the US), the release of India’s pathbreaking AI Governance Guidelines, and the Indian Army’s declaration of the current year as the 'Year of Data Centricity and Networking.' Key themes included the crucial need for human oversight, rigorous testing of AI-enabled systems, concerns over data reliability, and preparing military leadership for an era where machines recommend and act at unprecedented speed. The session stressed that AI should be treated as a tool—albeit a transformative one—not as a replacement for human conscience and command. The address concluded with reassurance that India, rooted in its civilization’s ethical ethos, is poised to lead globally in responsible military use of AI.
- Strong turnout and high excitement around AI’s role in India’s defense sector.
- Five out of 21 outcome statements from recent Indo-French talks in Mumbai were defense-related, with one focused specifically on critical and emerging technologies.
- Emphasis on India’s global technology partnerships, including ongoing high-level discussions with the US.
- Release of the India AI Governance Guidelines at the summit, laying out a pragmatic and comprehensive regulatory approach, with national security as a key risk area.
- The Indian Army has declared this year as 'The Year of Data Centricity and Networking', focusing on data as a critical asset for warfare.
- AI to be incorporated across military decision support, surveillance, and operational systems, with active collaborations among military, industry, startups, and academia.
- Stringent focus on ensuring human control is institutionalized in law—not just policy—for all AI military applications.
- Call for AI systems to undergo the same rigorous evaluation, certification, and field testing as traditional weapon systems, especially considering the 'black box' challenges of AI.
- Leadership development to include training in AI’s pitfalls, ethical risks, and integration into command decisions—maintaining ultimate moral responsibility with human commanders.
- Reference to India’s ethos—melding power with restraint (dharma)—as a foundational principle for global leadership in responsible defense AI.
From Access to Impact: Enabling SMEs in the AI Economy | AI Impact Summit 2026
The opening session of the India AI Impact Summit 2026, organized by the India Electronics and Semiconductor Association (IESA), focused on strategies for driving AI adoption, productivity, and market access among Indian MSMEs and startups. Distinguished speakers from leading semiconductor firms and venture arms shared current trends, challenges, and opportunities for scaling AI deployments beyond the prototype stage toward tangible use cases. They highlighted India's unique potential to address local needs—such as privacy, environmental clean tech, and localized edge computing—by shifting emphasis from cloud-centric to device- and premise-centric AI solutions. Panelists acknowledged that while India presently lacks globally scaled deep tech companies, there is ample talent, capital, and government-industry-academia collaboration to foster such innovation. Lessons drawn from global ecosystems stress the need for developing sustained differentiation, integration, and solving meaningful, high-value problems for long-term impact rather than rapid, SaaS-style pivots. The session called for realistic timelines, smarter deployment strategies, and targeted support for MSMEs and startups to realize India's AI potential on its own terms.
- IESA (India Electronics and Semiconductor Association) represents the collective interests of industry, government, and academia to drive semiconductor and AI ecosystem growth in India.
- The central session topic was 'AI for scale: driving adoption, productivity and market access for Indian MSMEs and startups.'
- AI adoption remains uneven across sectors and demographics, with 70% usage in software/content creation but only 20-36% in finance/accounting roles.
- In certain genres of journalism and other content categories, roughly 50% of output is now AI-generated, marking a significant shift in content creation practices.
- India’s current AI and semiconductor ecosystem is still nascent; there are no globally scaled Indian deep tech companies in AI or semiconductors yet.
- Key challenge for Indian deep tech: unrealistic SaaS-like expectations clash with deep tech’s need for longer timelines (9-18 months for chip development vs. rapid SaaS pivots).
- Domain-specific, edge-oriented, privacy-conscious AI solutions represent a major untapped opportunity for Indian startups and MSMEs, especially for sectors like surveillance, environment, and public services.
- Global ecosystem lessons: sustainable IP creation, alignment with high-value long-term problems, and robust integration matter more than quick exits or IPOs.
- AI at the Edge (on-device/premises) is viewed as essential for privacy, real-time response, and environmental solutions, as opposed to cloud-centric models dominated by global players.
- AI adoption in manufacturing and MSMEs is presently below 5%, highlighting significant growth potential.
- Collaboration across government, industry, and academic platforms like IESA is seen as key to amplifying India’s AI and semiconductor capabilities.
Automating Bharat: Robotics, Physical AI, and the Future of Make in India | AI Impact Summit 2026
The session focused on the transformative potential of AI and robotics in the Indian manufacturing sector at a critical juncture marked by both technological inflection and urgent geopolitical opportunity. Panelists from NVIDIA and Art Park (IISc Bangalore) emphasized that the traditional definitions of automation are evolving, with the integration of AI turning manufacturing into a software-defined and data-driven endeavor. AI-driven factories can now achieve rapid product iteration, higher quality, and improved efficiency, with global benchmarks like China's fully digital car factories cited as examples. In India, the unique structure of manufacturing, dominated by MSMEs, presents both challenges (such as limited digitization and scalability) and opportunities enabled by the country's deep IT talent pool. Industry leaders called for the development of open-source, modular AI platforms tailored to India’s context, and stressed the importance of robust infrastructure, access to capital, meaningful collaborations, and clear policy stimulus to support MSMEs’ digital leap. Concerns around job displacement were addressed by asserting that competitiveness now requires automation, and the human-centric principles of ‘Industry 5.0’ can help ensure balanced growth. The panel advocated for a dedicated ‘Digital Public Infrastructure’ (DPI) for AI in manufacturing to democratize access to advanced tools and platforms, empowering even the smallest units to participate in the new industrial revolution.
- AI and robotics are redefining manufacturing, making it agile, efficient, and quality-driven by turning physical processes into software-defined ones.
- Reference to global best practices: China’s fully digital car factories achieved operational milestones in just 9 months, compared to traditional timelines of 3 years.
- India’s manufacturing ecosystem is dominated by MSMEs, most of which are not even digitized, yet they are critical to India's $1 trillion manufacturing ambition.
- Adoption of ‘Industry 5.0’ principles to balance efficiency, agility, sustainability, and human-centricity amid automation concerns.
- Indian AI and robotics startups are emerging, supported by initiatives like Art Park (29 startups incubated) and substantial government funding.
- Growing developer ecosystem in India with strong IT talent; open-source and regional customization seen as crucial for success.
- AI-enabled factories allow engineers to model, simulate, and optimize operations remotely, lowering barriers to innovation.
- Panelists highlighted the need for a dedicated Digital Public Infrastructure (DPI) for AI in manufacturing to support MSMEs with modular, accessible tools.
- Explicit emphasis that not automating poses as much risk to jobs and competitiveness as automating, urging industry-wide digital transformation.
How to Ensure AI Quality at Scale Across Billion-User Markets
This session at the India AI Impact Summit 2026 delved into the evolving operating models for AI-driven safety and quality assurance on digital platforms, especially given the explosive growth in AI-generated content. Panelists emphasized the urgent need for proactive, resilient operational architectures utilizing intelligent sampling and automated evaluation frameworks. Leaders discussed the shift from reactive quality checks to proactive systems, highlighting the importance of an 'AI-first' mindset, multidisciplinary skill sets, and the transition from purely operational roles to transformational ones that architect solutions. Regulatory challenges were examined, with suggestions for bridging inconsistencies across jurisdictions through industry consensus, standardization, and the use of reasoning APIs. The panel underlined the trend towards small, specialized teams augmented by AI agents, advocating for common policy standards and practices such as content watermarking to ensure transparency, accountability, and adaptability as AI content proliferates. Community-based approaches and collaborative guardrails were championed as vital for safely scaling AI innovations across global and niche populations.
- Emphasis on proactive quality management through architectures combining intelligent sampling with automated evaluation frameworks.
- The 'AI-first' mindset requires a shift towards experimentation, curiosity, and openness within organizations.
- Three pillars for AI readiness: mindset, skillset (balancing domain and technical expertise), and the ability to transform and architect new systems.
- Regulatory uncertainty, especially across countries, can be reduced by industry collaboration, standard setting, and computational consensus tools (reasoning APIs).
- Community-based models like 'community notes' and forums help adapt AI to niche populations while maintaining global standards.
- Future operational models will comprise specialized human teams, augmented by AI agents, with heavy reliance on advanced policy and model evaluation tools.
- Industry-wide standards (e.g., digital content watermarking) promise to ease compliance and enforcement, reducing operational complexity.
- Small, expert teams guided by shared standards will replace large, unwieldy operational organizations for scaling AI safely.
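The "intelligent sampling" pattern described above can be sketched as a risk-weighted review queue: rather than reviewing a flat random slice of AI-generated content, the evaluation budget is tilted toward items an upstream classifier flags as risky. This is a hypothetical illustration; the field names, risk scores, and budget are assumptions, not any platform's actual pipeline.

```python
# Hypothetical sketch of risk-weighted sampling for content review.
import random

def build_review_queue(items, budget, seed=0):
    """Pick `budget` distinct items for evaluation, with selection
    probability proportional to each item's risk score."""
    rng = random.Random(seed)
    weights = [item["risk"] for item in items]
    picked = {}
    # random.choices samples with replacement, so de-duplicate until
    # the review budget is filled.
    while len(picked) < min(budget, len(items)):
        choice = rng.choices(items, weights=weights, k=1)[0]
        picked[choice["id"]] = choice
    return list(picked.values())

# Invented corpus: 90 low-risk items and 10 high-risk items.
items = [{"id": i, "risk": 0.05} for i in range(90)]
items += [{"id": 90 + i, "risk": 0.9} for i in range(10)]
queue = build_review_queue(items, budget=20)
high_risk = sum(1 for it in queue if it["risk"] > 0.5)
print(len(queue), high_risk)  # high-risk items dominate the 20-item budget
```

The design point is that a fixed review budget buys far more coverage of likely-bad content than uniform sampling, which is what makes proactive (rather than reactive) quality management affordable at billion-user scale.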
Equity, Safety & Accountability: Shaping the Future of Fair Tech
The session at the India AI Impact Summit 2026 assembled a globally diverse panel of AI policy leaders and practitioners to interrogate the practical realities of fairness, safety, and trust in the ongoing development and deployment of artificial intelligence. Moderated by Vidhi Sharma, the conversation moved beyond generic principles, diving into the uneven geographic and sectoral distribution of AI capabilities, fragmented accountability structures, and the acute role of trust at the interface between AI systems and real users. Panelists drew on the 2026 International AI Safety Report to evidence low adoption rates in emerging economies (around 10% in working populations), highlighting the compounding risks of this disparity. Trust was recognized as both a technical and human challenge, requiring not just better datasets but coordinated governance between governments, industry, and civil society. A key insight was the inadequacy of technical fixes alone without robust frameworks for accountability and inclusion. The session further explored democracy’s vulnerability to AI transformations, particularly through the lens of electoral processes in the Global South, underlining the urgent need for inclusive rule-setting and active engagement from both technology producers and users. Recent international collaborations, such as EU-India agreements, were cited as positive steps towards multilateral AI governance.
- The 2026 International AI Safety Report highlights that AI adoption rates in emerging economies remain low—around 10% for the working population in Asia, Africa, and Latin America.
- AI capability and benefit distribution is highly unequal; most benefits accrue to countries developing the technology.
- Trust in AI is now a central challenge, surpassing mere capability issues; judgment in when and how to trust AI remains underdeveloped at individual and institutional levels.
- Improved datasets are essential for inclusive AI models, but fairness ultimately requires comprehensive governance involving government, industry, and civil society.
- More than half of surveyed AI system developers estimate a 10% or greater chance of significant failure in AI systems—a level of risk the panel contrasted with the far stricter failure tolerances accepted in aviation safety.
- Recent EU-India agreements, exemplified by the events of January 26, 2026, signal an emerging multilateral and middle-power coalition for AI rule-setting.
- AI is actively reshaping democratic processes and media ecosystems, as witnessed in Indonesia’s elections, raising concerns about power asymmetries and the integrity of democratic institutions.
AI Adoption in the Global South: Trust, Technology & Impact
The session at the India AI Impact Summit 2026 brought together leading voices from finance, policy, and technology to discuss establishing trust across the AI value chain as a cornerstone for widespread adoption, particularly focusing on India and the global south. Representatives from JP Morgan Chase, Singapore's Infocom Media Development Authority (IMDA), Anthropic, and the Partnership on AI underscored the urgent need to move from theoretical principles to concrete practices in both AI governance and deployment. JP Morgan Chase highlighted its leadership as both a major deployer and developer of AI, with more than 400 AI use cases in production, emphasizing the critical nature of accountability and outcome-focused governance. Singapore shared its adaptive and iterative approach to regulation, including updated guidelines for generative and agentic AI, as well as the establishment of the AI Verify Foundation to drive technical consensus and multi-stakeholder collaboration. Anthropic announced the opening of its first Indian office in Bangalore and detailed its work on the public 'constitution' for its Claude AI model, which is continuously updated to embed trust and ethics at the core of its technology. All speakers agreed that building trust requires transparent, interoperable standards and frameworks, industry-wide collaboration, and centering the human experience—particularly to drive adoption in emerging markets. The Partnership on AI also previewed the release of two forthcoming reports on trust and assurance ecosystems for policymakers. The overall consensus was that collaborative, dynamic, and outcome-focused governance is essential for responsible scaling of AI both within India and globally, ensuring that innovation serves broad societal and economic interests without fragmentation.
- JP Morgan Chase reported having over 400 AI use cases live in production, focusing on risk, customer experience, and fraud prevention.
- Responsibility and accountability between AI model creators (developers) and deployers (users) were emphasized as foundational for trust.
- Singapore's IMDA has updated its Model Governance Framework for generative and agentic AI and launched the AI Verify Foundation to bring together developers, deployers, and testers.
- Regulatory frameworks should be adaptive, outcomes-based, and interoperable across borders to avoid market fragmentation, especially important for scaling innovations from countries like India.
- Anthropic announced its first Indian office in Bangalore and highlighted India as the #2 country in Claude AI usage globally.
- Anthropic introduced its public 'constitution' for Claude, a living document guiding AI ethical behavior and compliance, with an update to address agentic AI.
- The Partnership on AI brings together 140 partners across 18 countries and will soon release two reports aimed at helping policymakers build robust trust and assurance ecosystems.
- Robust AI governance must integrate technical standards, transparency, auditability, and the capacity to explain and justify automated decisions in highly regulated sectors like finance.
- Building trust in AI is not only a safety measure but a key enabler for adoption and economic growth—particularly for the global south and smaller states.
The National AI Stack: From Compute to Commercial Impact | AI Impact Summit 2026
📅 Sessions from 2026-02-19
PM Modi & Global Leaders arrive at AI Impact Summit | Shaping the Future of AI Together
The India AI Impact Summit 2026 commenced with an address by Prime Minister Narendra Modi and distinguished global leaders, positioning India at the forefront of responsible and inclusive AI innovation. The event, attended by delegates from 118 countries, underlined India’s commitment to democratizing AI across five layers—application, model, compute, infrastructure, and energy—emphasizing public access to AI resources, sovereign model development, affordable compute for startups, robust digital infrastructure, and substantial investments in clean energy. Key policy shifts were highlighted, including state-backed data center growth, major public-private partnerships, and reforms to attract and process global data in India. The Tata Group announced collaborations with OpenAI and AMD to establish India's first large-scale AI-optimized data center, targeting an initial 100MW capacity scalable to 1GW, and unveiled efforts in industry-specific AI solutions, agentic operating systems, and domain-centric chips. The summit showcased India’s drive to make AI accessible for all, focusing on human-centric, ethical, and developmental goals aimed at transforming public services and enterprise productivity, while setting global standards for inclusive AI governance and innovation.
- The summit is attended by delegates from 118 countries, making it the first AI summit hosted in the Global South and the largest to date.
- India outlined a five-layer AI stack strategy: applications; sovereign models (including several multimodal/multilingual models launched at the summit); public compute access (38,000 GPUs available now, with 20,000 more to come); infrastructure, spanning local and global data center investments; and clean energy, which now accounts for over 50% of India’s generation capacity.
- Major policy shift: India's budget announcement encourages global data to be stored, processed, and serviced in India for value-added services worldwide.
- Tata Group is establishing India's first large-scale AI-optimized data center (100MW, scaling to 1GW) in partnership with OpenAI and AMD.
- Tata Group is building industry-centric AI solutions, including an AI data insights platform based on diverse Indian datasets and an AI operating system for enterprises.
- Government and industry are jointly driving upskilling/reskilling with academia to prepare India’s workforce for the AI era.
- India treats compute as a public good, ensuring affordable AI resources for startups, researchers, and students.
- Emphasis on responsible, human-centric AI for social good (‘AI for Good’), inclusion, democratic values, and ensuring the Global South is a co-author of the AI age.
- Focus on further reforms in clean/nuclear energy infrastructure to fuel AI growth sustainably.
- Stanford University placed India among leading nations in AI adoption, talent, and diffusion.
Language as Digital Infrastructure: Enabling Inclusive AI Across Communities
The session at the India AI Impact Summit 2026 focused on the challenges and imperatives of scaling voice-enabled AI systems for India's diverse and massive population. Sunil Gupta, co-founder and CEO of Yotta, highlighted the vital interplay between AI model infrastructure and the realities of India's mixed telecom backbone—spanning both data and legacy PSTN networks. The discussion emphasized the transition from prototype to production, arguing that real-world deployment demands robust, redundant, and scalable compute and network resources. Key technical requirements include ultra-low latency, multilingual and code-mixed language support, and dynamic, cost-effective scaling, especially during demand spikes. This is compounded by the need for regulatory compliance and edge data centers to meet real-time, hyper-local user demands. Sovereign clouds, particularly those with data localization and tenant separation, were proposed as the foundation for widespread and compliant deployment of voice AI at scale.
- Voice AI in India must operate across both modern data networks and legacy PSTN telecom infrastructure, each with unique challenges and constraints.
- Scalability from pilot to mass deployment involves addressing millions of concurrent users, with a strong emphasis on reliability and ultra-low latency.
- Voice-based interactions are highly sensitive to delays; user expectations demand near-instant responses akin to human conversation.
- Multilingualism and code-mixed language usage (Hindi, English, local dialects) significantly increase system and model complexity.
- Edge data centers (localized compute/storage) are identified as a critical enabler for meeting stringent latency requirements at population scale, allowing processing closer to users in regions like Guwahati versus central data centers.
- Dynamic and cost-efficient compute scaling is essential: cloud models must enable elastic GPU allocation to manage variable traffic (e.g., festive spikes).
- System reliability demands fully redundant (active-active) network, compute, and storage architectures to ensure uninterrupted service despite hardware or network failures.
- Adoption of sovereign clouds is advocated for data protection, regulatory compliance (e.g., India's DPDP Act), and avoidance of conflicts with foreign jurisdiction, while enabling both central and edge deployments.
- Real-life deployments are already addressing these challenges, moving beyond theoretical discussion and into tangible practice.
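The edge-plus-redundancy behavior described in these points can be sketched as a simple routing rule: send each voice session to the lowest-latency healthy site in the user's region (an active-active pair), and fail over to a central region only when the whole edge pair is down. The site names, latency figures, and health flags below are illustrative assumptions, not any provider's actual deployment.

```python
# Hypothetical latency-aware routing with active-active failover.
SITES = {
    "guwahati-edge-a": {"region": "guwahati", "latency_ms": 12, "healthy": True},
    "guwahati-edge-b": {"region": "guwahati", "latency_ms": 14, "healthy": True},
    "mumbai-central":  {"region": "mumbai",   "latency_ms": 65, "healthy": True},
}

def route_session(user_region, sites=SITES):
    """Prefer the lowest-latency healthy site in the user's region;
    otherwise fall back to any healthy site anywhere."""
    healthy = [(name, s) for name, s in sites.items() if s["healthy"]]
    local = [(name, s) for name, s in healthy if s["region"] == user_region]
    candidates = local or healthy          # failover when the edge pair is down
    name, _ = min(candidates, key=lambda ns: ns[1]["latency_ms"])
    return name

print(route_session("guwahati"))           # → guwahati-edge-a (nearest edge)
SITES["guwahati-edge-a"]["healthy"] = False
print(route_session("guwahati"))           # → guwahati-edge-b (active-active pair)
```

With both edge sites marked unhealthy, the same rule degrades gracefully to the central region: latency rises, but the session still completes, which is the "fully redundant, uninterrupted service" property the session emphasized.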
AI in Academia: Shaping the Future of Education and Research
This session at the India AI Impact Summit 2026 focused on the transformative influence of artificial intelligence (AI) within higher education, highlighting both opportunities and challenges. Panelists included renowned academics, legal experts in AI law, industry leaders, and entrepreneurs who collectively underscored that AI is fundamentally rearchitecting how institutions operate, how faculty teach, and how students learn. ILM University showcased its comprehensive institutional approach of embedding AI and productivity-enhancing tools across all disciplines, not only in computer science but also in humanities and other fields, starting from the upcoming academic session. The discussion emphasized the necessity for both students and educators to be adaptive, agile, and critically engaged, while also cautioning against the risks of overreliance on AI for core cognitive processes. Moreover, the session revealed a growing alignment between industry demands and academia’s evolving curricula, where institutions are providing AI upskilling and certifications—often in partnership with major tech companies—to both faculty and students. The overall tone pointed to urgency: integrating AI is not optional but essential to institutional survival, workforce readiness, and ethical leadership development.
- AI is now a present reality in higher education, affecting pedagogy, research, and institutional operations.
- ILM University has overhauled its syllabi to embed AI-driven tools and learning in all disciplines, instituting a six-part course structure where the final part involves applying AI to previously learned concepts.
- From the coming academic session, the university will adopt AI integration not just for computer science, but across humanities and other departments.
- Industry expectations are accelerating: tasks that once took 30 days are now expected to be completed in three, necessitating a more robust integration of AI skills.
- Faculty training is key; ILM is partnering with companies like Microsoft, Google, and Nvidia to certify and upskill teachers in AI, leveraging a train-the-trainer model.
- Students are encouraged to use AI as a productivity tool but warned against overreliance that may erode critical thinking and problem-solving abilities.
- Interviews and recruitment by companies are increasingly automated through bots, prompting institutions to prepare students for AI-driven hiring processes.
- The discussion highlighted the existential risk for institutions that fail to adapt to the AI-driven paradigm shift.
Guarding the Consumer: AI for Safety, Resilience, and Protection
The session, led by Kunal Wala (Dalberg) at the India AI Impact Summit 2026, focused on the urgent challenge of technology-driven financial fraud in India amidst rapid digital financial inclusion and AI adoption. Panelists from Mastercard, the Gates Foundation, Bharti Airtel, and behavioral science sectors underscored how India's pioneering digital payment infrastructure (DPI, UPI) and high financial inclusion rates have simultaneously created vast opportunities for fraudsters, especially with the exponential rise of generative AI (GenAI) enabling deepfake-driven scams and highly tailored attacks. Notable statistics highlighted the scale of the crisis: 30,000 crore INR lost annually to fraud in India, with financial fraud outpacing all other sectors in growth. The panel discussed pressing issues including the lack of real-world enforcement of consumer protection regulations, the industrialization and cross-platform sophistication of scams, and the critical interplay between technology solutions and behavioral interventions. Mastercard's approach emphasizes privacy, transparency, and protection as core pillars, with 'inclusion by default' and 'defense by design' as guiding principles. Airtel addressed the surge in both the scale and sophistication of frauds, while experts stressed the need for coordinated, cross-channel responses and trust-building to protect vulnerable populations and sustain digital economic growth.
- India sees 30,000 crore INR lost to financial fraud annually—equivalent to 6 lakh INR per minute.
- Over 900 million Indians are digitally included; more than half use UPI, with daily transaction values running into the trillions of rupees.
- Globally, $1.3 trillion lost to scams in the last year; India accounted for $48 billion.
- 75% of Indians have encountered scams or fraud in the past year; average individual faces 119 scam attempts across channels.
- Generative AI (GenAI) has drastically reduced the marginal cost and increased the scale and customization of financial scams (e.g., deepfakes, AI-voice scams).
- Current consumer protection regulations are strong on paper but weakly enforced in practice.
- Mastercard achieved its goal of bringing 1 billion people into the formal economy by 2025; prioritizes privacy, transparency, and protection as responsible AI tenets.
- Sophisticated, cross-platform scams are now run at an industrial scale, transitioning from individualized crimes to organized operations.
- Behavioral change strategies must complement technological solutions to build trust and resilience among users.
- Despite near-universal bank account ownership (90% for adults), only half actively use them—trust and safe usage remain challenges.
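As a quick arithmetic check, the two headline figures above are mutually consistent: 30,000 crore INR per year works out to roughly 6 lakh INR per minute.

```python
# Sanity-checking the fraud figures cited in the session.
CRORE, LAKH = 10**7, 10**5
annual_loss = 30_000 * CRORE                 # 30,000 crore INR per year
per_minute = annual_loss / (365 * 24 * 60)   # INR lost per minute
print(round(per_minute / LAKH, 1))           # ≈ 5.7 lakh INR per minute
```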
Enterprise AI 2026: Driving Efficiency and Innovation at Scale
The session introduced 'Hacksale', a platform designed to power innovation and accelerate product adoption by providing AI-driven code and product evaluation services. The presenters emphasized the growing autonomy available to innovators in diverse fields—marketing, healthcare, education, and finance—citing a surge in startup creation as a result. Hacksale addresses a key challenge for innovators: evaluating whether their products are market-ready, viable, secure, and scalable. The platform supports over 20 programming languages and frameworks, offering automated, multi-agent benchmarking for code, business viability, scalability, and security. Its dynamic agent-based evaluation system allows for tailored analysis based on submission formats—including code, presentations, and videos. Notably, the platform is already integrated into large hackathons (including a Guinness World Record event for Agentic AI) and supports major industry and government innovation drives. For enterprises, Hacksale can reduce the manual evaluation time of submissions by up to 90%, enabling rapid, in-depth analysis and facilitating better identification of high-potential ideas. Security and IP integrity are addressed through isolated evaluation environments and stringent data retention policies, ensuring that proprietary information remains confidential.
- Hacksale enables domain experts across industries to autonomously build, evaluate, and scale digital tools and products.
- AI-based multi-agent system benchmarks and scores submissions on criteria such as code quality, security, scalability, and business viability.
- Supports over 20 languages and frameworks including React, Python, Rust, and Go, and evaluates multiple submission formats (code, PPT, video, documents).
- Successfully hosted a Guinness World Record Agentic AI hackathon with 700+ prototypes deployed on GCP in 30 hours in Bangalore (2025).
- Powers the Smart India Hackathon in partnership with the Ministry of Education (AICTE).
- For enterprises, automates bulk evaluation of thousands of submissions with a time reduction of up to 90% (from up to 60 min to 3-5 min per submission).
- Integrates with collaboration tools (e.g., Discord, Slack) as well as CI/CD pipelines.
- Robust workflow and IP security: isolated, disposable evaluation instances and strict data access limitations to protect proprietary ideas.
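The multi-agent scoring described above can be sketched as a weighted aggregation of per-criterion scores. This is a hypothetical illustration only: the criterion names, weights, and 0-100 scale are assumptions for the sketch, not Hacksale's actual rubric.

```python
from dataclasses import dataclass

# Hypothetical sketch: each evaluation "agent" returns a 0-100 score for
# one criterion; a composite score is a weighted average of them.
# Criteria and weights are invented for illustration.

@dataclass
class AgentScore:
    criterion: str
    score: float  # 0-100

def composite(scores: list[AgentScore], weights: dict[str, float]) -> float:
    """Weighted average of per-criterion scores; weights are normalized."""
    total_w = sum(weights[s.criterion] for s in scores)
    return sum(s.score * weights[s.criterion] for s in scores) / total_w

submission = [
    AgentScore("code_quality", 82),
    AgentScore("security", 70),
    AgentScore("scalability", 75),
    AgentScore("business_viability", 90),
]
weights = {"code_quality": 0.3, "security": 0.3,
           "scalability": 0.2, "business_viability": 0.2}

print(round(composite(submission, weights), 1))  # -> 78.6
```

In practice a platform like this would also normalize scores across agents and submission formats before ranking, but the aggregation step reduces to something of this shape.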
AI Quality Compliance Specialist
The session showcased a comprehensive AI-powered admissions and compliance platform from Leverage, revolutionizing the university application process for students, educational consultants, and universities. The platform integrates three key AI products: a Quality Compliance Specialist for instant document verification and fraud detection, an AI Interviewer for automated, criteria-based candidate assessment, and 'Vasu', an AI counselor for personalized course/university shortlisting through WhatsApp. By orchestrating all application steps—from document checks and offer letters to interviewing—through a single AI-driven pipeline with human-in-the-loop validation, Leverage claims dramatic improvements: a 20x productivity gain, 5x higher enrollments, 90% reduction in process time, and over 20,000 hours saved on interviews. The system is deployed with 850+ global universities, facilitates direct university collaborations for real (not just mock) interviews, and supports 10,000+ consultants in 27 countries. While AI handles research, compliance, and interview scheduling/scoring, human counselors retain the crucial personalized advisory role. Questions from the audience highlighted concerns about the potential for generic 'cookie-cutter' outputs and the loss of human touch, which the panel addressed by emphasizing AI's enabling role for large-scale democratization and the indispensable value of human empathy in final advisement.
- Leverage launched an integrated AI admissions platform combining document compliance, instant offer letters, and automated interviews.
- AI Quality Compliance Specialist reduces application processing time from 6-7 weeks to under 2 hours by instantly verifying documents and eligibility.
- AI Interviewer provides randomized, dimension-specific interview assessments, enabling mass-scale candidate evaluation.
- Vasu, the AI counselor, operates on WhatsApp 24/7, aiding personalized university/course shortlisting and refining results based on individual criteria.
- The platform offers a unified orchestration layer ensuring data integrity, auditability, and seamless workflows via a human-in-the-loop model.
- Impact metrics include: 20x productivity gain, 5x higher intake enrollment, 90% reduction in process turnaround time, 20,000 hours saved in interviewing, and hundreds of thousands of students impacted monthly.
- Over 850 university partnerships globally, with live, production-ready integrations for direct university admissions and interviews.
- Supports 50 Indian universities with online courses, and a consultant network of 10,000+ spanning 27 countries.
- Audience questions addressed concerns over personalization and the implications of AI standardizing interview prep, with Leverage emphasizing that while AI handles scale and compliance, human counselors provide nuanced advice and empathy.
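The human-in-the-loop model described above typically comes down to confidence-based routing: the AI auto-approves high-confidence checks and escalates uncertain ones to a reviewer. The threshold and document fields below are illustrative assumptions, not Leverage's actual pipeline.

```python
# Minimal sketch of a human-in-the-loop verification step: each document
# check carries a model confidence; low-confidence checks are routed to a
# human reviewer instead of being auto-verified. Threshold is an assumption.

REVIEW_THRESHOLD = 0.85

def route(doc_checks: dict[str, float]) -> dict[str, str]:
    """Map each document to 'auto-verified' or 'human-review'
    based on the model's confidence in the check."""
    return {
        doc: "auto-verified" if conf >= REVIEW_THRESHOLD else "human-review"
        for doc, conf in doc_checks.items()
    }

application = {"transcript": 0.97, "passport": 0.99, "bank_statement": 0.62}
print(route(application))
# -> {'transcript': 'auto-verified', 'passport': 'auto-verified',
#     'bank_statement': 'human-review'}
```

The design point is auditability: every auto-verified decision retains its confidence score, so the claimed 90% turnaround reduction comes from humans touching only the escalated fraction.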
Shaping Secure, Ethical, and Accountable AI for a Shared Future
The panel session at the India AI Impact Summit 2026 centered on the urgent need for AI accountability, trust, and responsible use as the technology is increasingly integrated into daily life, national infrastructure, and identity. Multiple panelists from government, tech giants (Microsoft, Google/Mandiant), consultancies, and innovators highlighted the transition from AI proofs of concept to broad, scaled deployment, raising critical dilemmas such as innovation vs. regulation, sovereignty vs. interoperability, and claimed vs. verified trust. India’s experience building digital public infrastructure (DPI) on the pillars of trust, safety, accountability, and resilience was emphasized, alongside the rapid expansion of the national AI mission, now backed by multi-billion dollar investments. Industry leaders discussed practical implementation: from exclusive cultural IP licensing for digital avatars to enterprise-level security and confidential computing, underscoring the spectrum of AI’s impact and the need for new standards. The discussion called for policy, technology, and ecosystem alignment to ensure AI’s rollout is secure, privacy-preserving, and culturally respectful, with a strong emphasis on verified trust and stakeholder responsibility.
- AI accountability, trust, and responsible use are the central themes as AI becomes integral to daily life and national infrastructure.
- India’s digital public infrastructure was built on foundations of safety, trust, accountability, and resilience, now extended to AI policy.
- The national AI mission, conceived in 2022, was scaled up to multi-billion dollar funding following direct input from India’s Prime Minister.
- Three key dilemmas frame the AI dialogue: Innovation vs. regulation, sovereignty vs. interoperability, and claimed vs. verified trust.
- Enterprise leaders emphasized urgent need for measurable, verifiable standards for AI privacy, security, and responsible deployment as projects move from pilot to large-scale roll-out.
- Exclusive AI rights licensing for cultural icons highlighted new legal and ethical territory; the panel described perpetual digital custodianship for high-profile personalities (e.g., Amitabh Bachchan).
- Technologists championed confidential computing and zero-trust security models for AI workloads, with legal and technical guarantees to prevent privacy breaches.
- Consultancies underscored the importance of translating frontline global intelligence on cyber threats into local resilience.
- Interplay between policy, technology, and ecosystem was identified as essential for AI to deliver benefits while maintaining societal trust.
- AI is seen as both a transformative opportunity and an existential threat to traditional business and governance models, necessitating rapid adaptation and oversight.
From Data to Innovation: Creating AI-Ready Infrastructure
The session at the India AI Impact Summit 2026, focused on 'safe AI', moved the conversation beyond abstract safety principles to the critical challenges of implementing AI safety and infrastructure at scale in India's complex socio-cultural context. Panelists Akash Kapur, a prominent tech policy commentator, and PK, a cyber security and privacy expert from IIT Hyderabad, discussed how technology’s deployment collides with real-world complexity—especially regarding inclusion, language diversity, and institutional accountability. The discussions highlighted that 'AI safety' is an evolving and multifaceted concept, encompassing not just existential risk but also social equity, linguistic access (including code-mixing common in India), responsible deployment, and continuous, context-sensitive oversight. The importance of ongoing, temporal approaches to safety (continuous audits, certifications) was emphasized, rather than static one-time checks. The session also offered examples of practical challenges, such as safety issues in AI agricultural chatbots, and delved into the difficulties of developing safety benchmarks in India’s hyperlocal, multilingual environment. Institutional accountability emerged as a central theme, with panelists emphasizing the need for distributed frameworks—public, private, and regulatory—to manage incentives and rapidly changing technologies, all informed by lessons from the internet era’s policy shortfalls.
- Shifted AI safety discussion from general principles to practical implementation in the Indian context.
- Highlighted that 'AI safety' now covers social inclusion, linguistic diversity (including complex code-mixing), and socio-cultural access barriers.
- Stressed that AI safety requires ongoing, temporal controls—not just static, initial compliance—with mechanisms like continuous audits and certification.
- Illustrated real-world issues such as agricultural AI chatbots producing problematic content, showing the unique safety risks at India’s population scale and the bottom of the pyramid.
- Emphasized that globally framed safety benchmarks often miss India's hyperlocal, context-driven challenges, particularly in multilingual and non-text (voice) interactions.
- Cited examples of institutional positive incentive alignment, such as Anthropic donating the MCP protocol to the Linux Foundation to create public goods.
- Argued for distributed, robust accountability structures—beyond simply targeting developers—including public policy, regulation, and collaboration between stakeholders.
- Urged policy makers to apply lessons from previous technological deployments, particularly around balancing incentives and the public good.
AI Standards and Global Prosperity: Navigating Agentic AI
The session at the India AI Impact Summit 2026 gathered leading voices in global AI governance to discuss the increasingly complex challenge of setting standards and ensuring responsible deployment of advanced AI, particularly agentic (autonomous action-taking) systems. Moderated by Ashley Casovan of the IAPP's AI Governance Center, panelists Rachel Adams (Global Center on AI Governance/University of Cambridge), John Dickerson (Mozilla AI), and Emanuel Kway (VDE UK/European AI Standards Committee) offered cross-continental perspectives on balancing policy, technical standards, and social implications. The conversation revealed how standards, far from being 'one size fits all,' are shaped by regional priorities, resource imbalances, and evolving concepts such as fairness, accountability, and transparency. Concrete instruments like the Global Index on Responsible AI and European AI Act-driven standards were discussed, highlighting both their potential for peer-driven policy action and their limitations—especially for under-resourced regions or organizations. The consensus: meaningful AI governance must be nuanced, context-sensitive, and collaborative, as the next wave of AI systems becomes ever more embedded in societal functions.
- Ashley Casovan (AI Governance Center, IAPP) emphasized the emerging need for clear skill definitions and standards for AI and digital governance professionals, particularly around agentic AI.
- Rachel Adams described the Global Index on Responsible AI (launched in 2024, with a new edition in July 2026) as a tool for benchmarking government progress and promoting peer-driven policy action, noting that the first edition deliberately omitted generative/agentic AI due to lack of global best practices.
- Adams underscored how global standards processes often inadvertently favor resource-rich nations and organizations, creating high barriers for less-resourced players and risking a 'gold standard' market filter.
- John Dickerson (Mozilla AI) stressed Mozilla's role in advocating for open source, decentralization, and privacy in an era when browser and AI platforms risk re-centralizing power over internet access.
- Emanuel Kway (VDE UK) detailed the process of operationalizing consensus into practicable standards for the European AI Act, highlighting the struggle to define concepts like 'fairness' across nations and sectors and the unique decision to integrate standards-setting from the start of regulation.
- The European standards committee JTC 21 was tasked with translating aspirational policy goals into concrete, consensus-driven technical standards involving experts from law, healthcare, defense, and other domains.
- Panelists agreed that successful standards must be adaptable to context, sector, and geography, cautioning against rigid or exclusionary approaches.
Institutional Intelligence: Preparing Global Organizations for an AI-First World
The session at the India AI Impact Summit 2026 focused on the critical transformation required in India’s education ecosystem to effectively respond to the rapid advancement of artificial intelligence (AI). Panelists emphasized the need for educational institutions to evolve from being mere consumers of AI products to becoming creators and innovators, ensuring that future graduates are not only AI-literate but also capable of driving India's AI revolution. The discourse highlighted the urgency for curriculum, pedagogy, faculty capabilities, and assessment methods to adapt swiftly, stressing industry-academia collaboration to bridge the skills gap and match the pace of technological change. The session offered practical case studies—such as AI-powered exam evaluation and admissions processes—to illustrate the tangible benefits of AI integration, and called for India to move beyond offering only coding talent to the world, advocating for greater product and knowledge creation. There was a consensus on the dual responsibility shared by higher education, industry, and government in fostering a sustainable, scalable AI-ready workforce and nurturing homegrown AI innovation.
- The AI economy requires both advanced technology and future-ready talent; neither alone is sufficient.
- India’s educational institutions must shift focus from consumption to creation of AI knowledge and products, per national leadership guidance.
- AI can tangibly improve institutional processes, e.g., using AI for large-scale exam paper evaluation and fairer, more transparent admissions.
- Professor Alok Kumar Ray implemented and piloted AI-driven tools at IIM Calcutta to supplement manual assessments, offering students multiple evaluation options for fairness.
- Industry-academia collaborations are pivotal to keep curricula and skill development abreast of fast-evolving AI trends and employer needs.
- There’s a strong push for skill-based over degree-centric learning, as the Indian education policy evolves to meet new technology demands.
- Panelist from Accenture Learn Vantage emphasized the necessity for academic programs to be agile given the unpredictable rapid evolution of AI specializations (e.g., agentic AI, adaptive AI).
- AWS India’s role as a skilling backbone was underscored, highlighting large-scale partnerships to drive an AI-ready workforce.
- Importance of government, industry, and academic partnership in building a sustainable, scalable AI-skilling ecosystem.
Masterclass 'Enterprise AI in Action'
The session at the India AI Impact Summit 2026 highlighted efforts to democratize artificial intelligence for both enterprises and rural India by introducing affordable, accessible AI tools and platforms. Key announcements included the rapid deployment of 9,000 AI agents in a corporate setting, the launch of 'team space' for coordinated enterprise AI usage, and a disruptive pricing model inspired by Jio, offering enterprise AI solutions for ₹99,500 per month for up to 50 users. The speakers emphasized AI as an equalizer for underserved populations and MSMEs, showcasing success stories from rural entrepreneurship workshops where villagers leveraged generative AI for small-scale business creation. Furthermore, the session featured advancements in inclusive 'natural language banking,' designed to overcome technology barriers by enabling seamless, multi-lingual interactions with financial institutions, thereby bridging gaps for less tech-savvy users. The rollout of 'team spaces' and the 'purple fabric' agentic platform marks a shift from individual productivity tools to collective business outcome delivery, ensuring knowledge retention, scalability, and inclusivity for organizations of all sizes and across sectors.
- Within 3 months, 9,000 AI agents were created and deployed internally in a single corporate entity.
- Introduction of 'team space'—a collaborative AI workspace—aims at consolidating AI agents and supporting team-based business outcomes.
- Major price disruption akin to Jio: Launching enterprise AI platform for ₹99,500 per month (for up to 50 users), making sophisticated AI accessible at less than half the cost of a single AI engineer.
- Emphasis on democratizing AI for MSMEs, chartered accountants, law firms, small businesses, and rural entrepreneurs without requiring dedicated AI engineers.
- Successful rural outreach: Over 1,800 villages and 2 million people engaged, with entrepreneurs using free AI tools for launching local products (e.g., chocolates, soaps) and business branding.
- Exhibition of an Enterprise AI Design Center, offering immersive transformation experiences in 30 minutes.
- Launch and demonstration of natural language banking that supports multilingual, inclusive, and personalized banking experiences powered by over 25 AI agents behind the scenes.
- Rollout of 'team spaces' built on the 'purple fabric' platform, ensuring collective productivity, contextual knowledge retention, and seamless team transitions for enterprises, banks, HR, and insurance.
- 95% of AI agent users are consumers (not creators), driving focus on easy provisioning and consumption over individualistic development.
- Mission-level advocacy to promote AI adoption as a 'great equalizer' in rural and urban contexts, bridging social and economic divides.
Powering the AI Boom: Accelerating Global Data Center Infrastructure
The session at the India AI Impact Summit 2026 focused on the growing integration of AI and automation in banking and financial services, emphasizing the evolution toward truly autonomous operations. Speakers highlighted industry-wide transformations, with automation permeating all customer lifecycle stages—need discovery, exploration, purchase, use, and service. The discussion identified four key pillars driving this future: invisibility of banking services, hyper-connectivity through open banking and integrated digital/physical experiences, insights-driven decision-making supported by AI-generated intelligence, and increased purposefulness anchored in customer trust and values. India's progressive digital public infrastructure (like UPI, Aadhaar, ULI, OCEN, TReDS) was recognized as a crucial enabler for these transitions, facilitating new models like flow-based lending and SME financial inclusion. Business Next showcased its AI-powered platform, powering over a billion daily interactions, and emphasized the necessity of ethical AI and regulatory guardrails for trust. The session set the stage for an industry panel, reinforcing that India is at the vanguard of AI-enabled financial transformation both for its domestic market and globally.
- Banking and financial services are rapidly evolving toward autonomous, AI-powered operations across all customer lifecycle stages.
- Four future-defining trends were identified: invisible (seamless) banking, increasing connectivity, insights-driven services, and purpose-driven engagement.
- India's digital public infrastructure is expanding beyond UPI and Aadhaar to include ULI (Unified Lending Interface), OCEN (Open Credit Enablement Network), and TReDS (Trade Receivables Discounting System), broadening credit access especially to MSMEs.
- Shift from asset-backed to flow-based lending is empowering small merchants and businesses via alternative data and digital verification.
- AI-powered customer trust, personalization, and ethical guardrails are critical as automation increases; banking institutions must maintain and enhance customer relationship values.
- Business Next's platform facilitates interactions for over a billion customers daily, deeply embedded in India’s financial ecosystem across banking and insurance.
- Emphasis was placed on regulatory activism and the need for ethical use of AI in banking, ensuring responsible and trustworthy financial innovation.
- India is leading the global fintech and AI-in-banking wave, with Business Next’s recognition as one of the country's pioneering firms.
- Industry panel to further discuss practical AI adoption strategies and measurable business outcomes for financial institutions.
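The shift from asset-backed to flow-based lending mentioned above can be reduced to a simple idea: underwrite against verified transaction inflows rather than collateral. The sketch below is an invented illustration of that idea; the multiplier and the use of a median are assumptions for the example, not any lender's real policy.

```python
# Illustrative sketch of flow-based underwriting: a credit limit derived
# from a merchant's verified monthly inflows instead of collateral.
# The 0.2 multiplier and median smoothing are invented assumptions.

def credit_limit(monthly_inflows: list[float], multiplier: float = 0.2) -> float:
    """Offer a limit as a fraction of typical monthly inflow.
    Use the median to damp one-off spikes in the flow data."""
    flows = sorted(monthly_inflows)
    n = len(flows)
    median = flows[n // 2] if n % 2 else (flows[n // 2 - 1] + flows[n // 2]) / 2
    return round(median * multiplier, 2)

# Six months of digitally settled sales for a small merchant (illustrative).
print(credit_limit([42_000, 38_500, 51_000, 40_200, 39_800, 44_100]))  # -> 8220.0
```

The enabling role of DPI rails like ULI and TReDS is precisely that they make these inflows verifiable at scale, which is what lets the "alternative data" replace collateral.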
Leaders’ Plenary | Global Vision for AI Impact and Governance | AI Impact Summit 2026
The India AI Impact Summit 2026 featured a series of high-level addresses by global leaders, highlighting AI’s transformative potential and the critical importance of human-centric, ethical, and inclusive governance. Leaders from India, Estonia, Serbia, Slovakia, Sri Lanka, and Switzerland addressed themes of trust, digital sovereignty, responsible innovation, access to digital infrastructure, and the need for multilateral cooperation. Estonia announced its ASD.AI and AI Leap initiatives to systematically embed AI and boost societal AI literacy, positioning itself as a testing ground for responsible AI. Serbia and Slovakia stressed the political and infrastructural challenges of concentrated technological power, focusing on sovereignty and the imperative of building local AI capacities. Slovakia announced significant developments in sovereign AI infrastructure and low-carbon compute, including launching its supercomputer Perun and hosting the Bratislava AI Forum. Switzerland, emphasizing Geneva’s ecosystem of digital governance, committed to host the 2027 global AI summit. Throughout, speakers rallied for frameworks built on ethics, transparent and trusted data, democratization of AI skills, and measuring AI’s success by its human impact rather than technical metrics.
- India positioned itself as a leader of an inclusive, human-centric global AI ecosystem, emphasizing digital infrastructure and responsible governance.
- Estonia launched the ASD.AI program and AI Leap initiative, aiming to integrate AI across the economy and boost AI literacy for students and teachers.
- Estonia aspires to be a global testing ground for responsible AI and is co-facilitating the UN’s global AI governance dialogue.
- Serbia underlined the geopolitical risks of concentrated AI infrastructure and committed to investing in local research centers and regulatory frameworks.
- Slovakia announced the AI Factory project, a sovereign data center initiative leveraging low-carbon energy, and launched its supercomputer 'Perun' for national AI capacity.
- Slovakia hosted the Bratislava AI Forum with the OECD in November 2025 to focus on AI and education.
- Switzerland highlighted the role of Geneva as a global AI governance hub, leveraging its multilateral ecosystem of agencies and research institutes.
- Switzerland officially announced its commitment to host the 2027 AI Summit in Geneva.
- All speakers underscored the necessity of trust, ethical frameworks, transparent data governance, and inclusivity in future AI development.
- Themes of sovereignty, transparency, and the democratization of AI skills, tools, and access recurred across sessions.
AI Impact Summit 2026 | Keynote Speakers
The session at the India AI Impact Summit 2026 showcased India's vision to become a leading global force in artificial intelligence, with prominent industry leaders such as Sunil Mittal and Shantanu Narayen emphasizing the transformative role of AI across sectors, the importance of open standards, and the challenges around democratization versus consolidation of AI resources. The keynote by Mukesh Dhirubhai Ambani, Chairman and Managing Director of Reliance Industries, was a landmark declaration: Reliance Jio announced a comprehensive AI strategy backed by an unprecedented investment of ₹10 lakh crores over seven years to establish sovereign AI compute infrastructure, affordable nationwide AI access, and partnerships across the innovation ecosystem. Detailed plans involve multi-gigawatt AI data centers, extensive deployment of edge compute, AI applications in vital sectors like education and healthcare, and a strong emphasis on multilingual and inclusive AI. The summit reinforced India’s commitment to leveraging AI as a tool for equitable progress, anchoring India as a global bridge between the North and South, and sparking a people-centric AI movement built on collaboration, openness, and social relevance.
- India positions AI as central to national progress toward becoming a fully developed nation by 2047.
- Reliance Jio will invest ₹10 lakh crores (~$120 billion) over the next 7 years in AI infrastructure and ecosystem development.
- Three-pronged infrastructure plan: gigawatt-scale AI-ready data centers (120 MW online in late 2026, with Jamnagar as a key site), an in-house surplus of 10 GW green energy, and deployment of nationwide edge compute closely integrated with Jio's network.
- Jio pledges to make intelligence as ubiquitous and affordable as connectivity, reducing the cost of AI access similar to how data costs were slashed.
- Commitment to world-leading multilingual AI, supporting all Indian languages for true inclusivity and accessibility.
- Jio launches AI-driven solutions such as: Jio Shikshak (adaptive AI teaching assistant in 22 languages), Jio Arogya AI (instant medical guidance), Jio Krishi (satellite and weather-based agritech guidance), and Jio Bharat IQ (voice-first AI for learning and access to services).
- Reliance calls for global open standards in AI, reflecting India's stance against proprietary control of AI resources and for democratized opportunity.
- Plans for deep partnerships across Indian enterprises, startups, and leading research institutions to co-develop transformative AI use cases.
- AI is framed as a job creator, not a disruptor, with new high-skill opportunities anticipated.
- India was highlighted as a 'vital bridge' between the Global North and South for AI collaboration, with an open, people-centric approach emphasized as the future direction.
PM Modi takes part in Leaders’ Plenary Session during the India AI Impact Summit 2026
In his address at the Leaders’ Plenary of the India AI Impact Summit 2026, the Prime Minister emphasized India's commitment to building a human-centric, globally sensitive AI ecosystem. He recalled humanity’s history of transforming disruptions into opportunities and called for collaboration to turn the AI revolution into the greatest opportunity for mankind. Drawing lessons from India’s successful digital initiatives during the COVID-19 pandemic, such as the vaccination platform and UPI, he underlined the nation's role in making technology a medium of service, not power. The Prime Minister laid out India’s vision for accessible and inclusive AI, especially for the Global South, and stressed the importance of ethics as AI’s influence expands. He put forward three key recommendations for ethical AI: establishing a global trusted data framework respecting data sovereignty; ensuring safety rules for AI platforms are transparent and verifiable—favoring a ‘glass box’ over a ‘black box’ approach; and embedding clear human values and guidance within AI systems. Announcing India's tangible progress, he revealed the deployment of 38,000 GPUs, with plans for an additional 24,000 in the next six months, and the creation of an AI corpus offering over 7,500 datasets and 270 AI models as national resources. The session concluded with a visionary call: AI must foster innovation, strengthen inclusion, and seamlessly integrate human values to maximize its impact for all humanity.
- Summit aims to build a human-centric, globally sensitive AI ecosystem.
- India’s digital initiatives during COVID-19 (digital vaccination platform, UPI) showcased technology as a medium of service and reduced the digital divide.
- India shares its digital public infrastructure globally and champions technology for empowerment, not domination.
- AI vision prioritizes accessibility and inclusion, especially for the Global South, making AI available to all.
- Ethics must be central to AI governance, as the scope for unethical behavior in AI is limitless.
- Three suggestions for ethical AI: 1) Creating a global trusted data framework respecting data sovereignty; 2) Ensuring AI safety rules are transparent (‘glass box’ approach) to boost accountability; 3) Embedding clear human values and guidance in AI development.
- India has already deployed 38,000 GPUs and plans to add 24,000 more in the next six months.
- An AI corpus with over 7,500 datasets and 270 AI models is now available as a national resource.
- India's stance: AI is a shared resource for the benefit of all humanity, with a focus on innovation, inclusion, and human values.
Inside the AI Impact Expo | PM Modi & Global Leaders Explore Future‑Defining AI Innovations
The India AI Impact Summit 2026 in New Delhi marked a pivotal moment for the Global South and the global AI community, attracting over 500 global AI leaders, including 20+ heads of state and government, such as French President Emmanuel Macron and UN Secretary-General António Guterres. The event showcased international collaboration, with country pavilions and participation from leading AI companies like 'Sarvam', 'Bharat Jain Gyani', and 'Socket', all endorsing the 'New Delhi Frontier AI Commitment'—a pledge for inclusive and responsible AI development. Prime Minister Narendra Modi emphasized the transformational phase India is leading, the significance of responsible AI for humanity, and the vital role of tech diplomacy. The summit underscored India's leadership in shaping the future of AI, focusing on innovation, responsible growth, and addressing global challenges such as healthcare, education, digital innovation, and climate change.
- Over 500 global AI leaders and more than 20 heads of state and government attended the summit, signaling unprecedented global engagement.
- The New Delhi Frontier AI Commitment was announced, with key Indian AI companies including Sarvam, Bharat Jain Gyani, and Socket pledging to promote inclusive and responsible AI.
- UN Secretary-General António Guterres and French President Emmanuel Macron were among the prominent world leaders attending, providing international endorsement.
- Country pavilions from around the world highlighted cross-border partnerships and global AI innovation.
- Prime Minister Narendra Modi called for making AI more sensitive, and highlighted AI's role in human welfare, digital innovation, and global partnership.
- The summit was framed as a defining moment for the Global South and a new era for tech diplomacy, with India asserting leadership in responsible AI development.
- Deliberations included themes of economic growth, digital transformation, healthcare, climate change, and education, with a focus on collaborative solutions.
- The event emphasized safe, transparent, and inclusive AI growth to benefit all segments of society.
AI Impact Summit Family Photo | United for Responsible AI | India AI
The session highlighted India's remarkable digital transformation over the past decade, with specific emphasis on the widespread adoption of mobile technology and the Unified Payments Interface (UPI) even at grassroots levels. French President Emmanuel Macron praised India’s digital journey, noting substantial growth in mobile usage and digital payments, including among informal workers. The discussion underscored India's role as a global leader in digital innovation, now setting the pace in artificial intelligence (AI) as well. References were made to Prime Minister Modi’s promotion of a 'human-centric vision' for AI, positioning India not only as a frontrunner but also as a guiding force for other nations in the responsible development and deployment of AI technologies.
- French President Macron lauded India's rapid digital progress, especially the growth in mobile and UPI usage over the last 10 years.
- Widespread UPI adoption was highlighted, noting even rickshaw drivers and informal workers are using digital payments.
- India's digital empowerment has transformed the daily lives of millions at the grassroots level.
- India is now moving swiftly toward an AI revolution with a focus on human-centric applications.
- Prime Minister Modi has articulated a vision where India leads and guides the global community in responsible, inclusive AI development.
Prime Minister Narendra Modi inaugurates AI Impact Summit 2026, Bharat Mandapam
The India AI Impact Summit 2026 opened with a sweeping vision of India's historical and modern leadership in systems thinking, technology, and inclusive progress. The summit highlighted India's advancements in digital public infrastructure, the importance of democratizing AI for the masses, and the strategic integration of AI across industries. Key announcements included Tata Group's first large-scale AI-optimized data center in partnership with OpenAI and AMD, focus on domain-specific semiconductor chips, and Tata's AI powered platforms tailored for Indian contexts. Anthropic, through CEO Dario Amodei, reaffirmed its commitment to India by opening an office, partnering with major Indian enterprises and nonprofits, and joining global and national collaborations on AI safety and economic impact. The summit underscored India's central role in shaping ethical, scalable AI solutions both for its populace and for the broader Global South, while balancing opportunities in health, education, and poverty alleviation with proactive management of economic and security risks.
- The summit welcomed delegates from 18 countries, making it the largest to date.
- India’s approach emphasizes inclusive digital public infrastructure and deploying AI at scale for the benefit of the entire population.
- AI tools are being made accessible to rural populations—an example cited was 1,500 rural women learning to use AI within four hours.
- Tata Group announced the establishment of India's first large-scale AI-optimized data center with OpenAI, starting at 100 megawatts capacity, with plans to scale to 1 gigawatt.
- Tata Group partnered with AMD to combine advanced AI architectures with Tata’s infrastructure, aiming to meet global standards for sustainable, high-density AI compute.
- Tata is building AI data insights platforms based on diverse Indian datasets and an AI operating system for industry-wide agentic solutions.
- Planned development of domain-specific AI-optimized chips, beginning with the automotive sector.
- Anthropic opened its India office, hired a local managing director, and formed partnerships with major Indian companies (including Infosys) and nonprofits in education, agriculture, and health.
- Anthropic committed to expanding AI’s benefits across the Global South, benchmarking models on India’s regional languages, and collaborating on AI safety and economic impact evaluations.
- India is positioned as a global leader in renewable energy for powering AI infrastructure.
- Summit reinforced India's role as a partner in managing the societal and economic impacts of advanced AI, joining the New Delhi Frontier AI impact commitments.
📅 Sessions from 2026-02-20
Hornbill Comes to India AI Impact Summit 2026 @TaFMANagaland
The provided transcript appears to be an unstructured collection of singing, chanting, and musical interludes rather than a formal session or panel discussion from the India AI Impact Summit 2026. There is no evidence of specific announcements, numbers, or policy statements relating to AI or related fields. The session seems more like a musical performance or celebratory gathering, with no discernible content for analytical summary in the context of AI impact or summit outcomes.
- Session consisted primarily of musical performances, repeated chanting, and fragmented lyrics.
- No clear policy announcements, statistical data, or formal discussions on AI or technology.
- No references to AI initiatives, government action, or industry-led developments.
Press Briefing by HMIT Ashwini Vaishnaw on AI Impact Summit 2026 | Day 5
The India AI Impact Summit 2026 brought together major global AI players, startups, and policymakers in a landmark event marked by active collaboration, unprecedented youth engagement, and significant investment commitments. Key highlights include the broad acceptance of Prime Minister Narendra Modi's 'Manav AI' vision (AI for the people, by the people, of the people), record-breaking student participation, over $250 billion pledged toward AI infrastructure, $20 billion in VC deep tech investments, and an expanding array of foundational AI models developed with frugal resources. The summit saw strong consensus on ethical, responsible AI use and focused on driving inclusivity, grassroots impact, and setting the stage for 'AI Mission 2.0.' More than 70 countries have already signed the summit declaration, expected to rise to over 80. India announced further steps to strengthen its AI ecosystem, such as launching new semiconductor facilities, and reaffirmed its commitment to ensure AI benefits reach the last person in society. Implementation of the agreed guidelines and initiatives has already begun, with India positioning itself as a leader in AI for inclusive national development by 2047.
- Participation from all major global AI companies and substantial startup involvement.
- Youth engagement record: over 250,000 students participated, setting a Guinness World Record.
- Investment pledges: over $250 billion for AI infrastructure, $20 billion for VC-backed deep tech.
- 'Manav AI' vision (AI for humanity) widely accepted by international delegates.
- 70+ countries have already signed the summit declaration, aiming for 80+ signatories.
- AI Mission 1.0 exceeded its goals: 38,000 GPUs deployed (goal was 10,000), 12 foundational/multimodal models developed (goal was 2), 12 AI Safety Institutes (goal was 1).
- Summit extended by one day to accommodate global dialogue and maximize consensus.
- Major upcoming milestones announced: new semiconductor plant in Uttar Pradesh and commencement of commercial production at Micron's large facility.
- Commitment to inclusivity: focus on last-mile diffusion so AI benefits reach every citizen.
- Strong international consensus around responsible, ethical guardrails, with groundwork laid for actionable guidelines.
- AI Mission 2.0 announced, promising greater goals and broader implementation.
Powering AI | Global Leaders Session | AI Impact Summit India Part 2
The session highlighted a shift in AI research and infrastructure priorities from simply scaling up model size to focusing on adaptability, data ownership, and efficiency—both technologically and in terms of energy use. Speakers emphasized that current transformer-based AI architectures have inherent limitations, making adaptability, flexible interfaces, and real-time model learning critical research frontiers. On the infrastructure side, the panel underlined the urgent challenge of powering fast-growing AI data centers, noting their immense electricity consumption and dependence on reliable renewable power. The International Solar Alliance announced the global 'AI for Energy' mission to leverage AI in accelerating decentralized renewables and discussed the reciprocal challenge of meeting AI's energy needs, especially in developing regions. The interplay of AI and energy innovation, regulatory frameworks, workforce upskilling, financing, and robust, resilient digital infrastructure were identified as central priorities for both India's and the world's responsible AI-driven growth.
- Transformer-based AI architectures have reached a ceiling; future progress lies in adaptive data, intelligence, and interfaces.
- Focus in AI innovation is shifting from model scaling to adaptability, data ownership, and efficiency.
- Energy constraints, not algorithms or chips, are set to become the primary bottleneck for AI deployment, especially for data centers.
- The International Solar Alliance (ISA) launched its global 'AI for Energy' mission at the summit, aimed at applying AI to energy systems.
- Decentralized solar adoption in India remains low (15-20%) compared to global averages (40%), but AI can help accelerate integration.
- Training and workforce development at the intersection of AI and energy is a new ISA priority, with the creation of the ISAI Academy.
- Data center electricity consumption is growing rapidly; global data center demand could double every three years and already equals the grid of Spain.
- Notable outages at major tech companies (Meta, Google Cloud, AWS, Azure, TikTok) illustrate the fragility of current infrastructure with respect to energy supply.
- The session underscored the need for innovation in data center cooling, 24x7 renewable integration, and regulatory/digital interoperability.
- Concentration of AI compute, driven by hunger for scarce GPUs, centralizes power; improving efficiency and adaptability could democratize AI development.
- India and other developing countries are expected to see a rapid increase in data center energy demand, highlighting the need for sustainable solutions.
- Loudoun County, Virginia, USA is noted as a data center capital, with a peak draw of nearly 3 GW—enough to power a small country.
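As a quick sanity check on the growth figure above: a demand that doubles every three years implies roughly 26% annual growth. The sketch below is illustrative arithmetic only; the 3 GW starting point echoes the Loudoun County peak cited in the session, and the six-year horizon is an arbitrary example.

```python
def projected_demand(current_gw: float, years: float, doubling_years: float = 3.0) -> float:
    """Project demand assuming exponential growth with a fixed doubling time."""
    return current_gw * 2 ** (years / doubling_years)

# A 3-year doubling time corresponds to 2**(1/3) - 1 ~ 26% growth per year.
annual_growth = 2 ** (1 / 3) - 1
print(f"Implied annual growth: {annual_growth:.1%}")

# A 3 GW region doubles twice in six years under this assumption.
print(f"After 6 years: {projected_demand(3.0, 6):.1f} GW")
```

Under the panel's stated assumption, even a single large data-center region would quadruple its draw inside a decade, which is why 24x7 renewable integration featured so prominently.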
Panel Discussion: AI in Digital Public Infrastructure (DPI) | India AI Impact Summit
The session at the India AI Impact Summit 2026 brought together leaders from government, multilateral agencies, consulting, and the private sector to discuss the intersection of Artificial Intelligence (AI) and Digital Public Infrastructure (DPI). Panelists emphasized that India’s DPI journey serves as a global benchmark for scalability, inclusion, and innovation, and that AI can significantly accelerate improvements in citizen service delivery when built atop mature DPI foundations. However, strong safeguards, inclusion measures, and a focus on integrity and sovereignty are required to avoid exacerbating digital divides or perpetuating biases. The UNDP and World Bank advocated for early embedding of safeguards and user-centric design to maximize AI’s benefits while mitigating risks, and highlighted the need for open, interoperable data ecosystems to foster innovation. The experience of India in leveraging population-scale open infrastructure, such as Aadhaar and UPI, has spurred private sector innovation and offers valuable lessons for applying AI to address broader societal goals such as education, sustainability, and MSME support.
- India's DPI models are considered among the best globally and are a source of national pride and international benchmarking.
- Governments should focus on four pillars for AI in DPI: inclusion, integrity, safeguards (including bias detection and explainability), and sovereignty.
- AI can transform citizen service delivery by orchestrating APIs at scale, enabling personalized and multimodal government interactions.
- Population-scale DPI requires early and strong safeguards to avoid reproducing societal divides and to ensure inclusion from the outset.
- UNDP has developed a ‘universal DPI safeguards framework’ being implemented across multiple countries and stresses early involvement of citizens in design.
- AI’s rapid scaling amplifies both positive impacts and potential harms, making early bias detection and consent mechanisms critical.
- World Bank emphasized DPI’s role in breaking down data and organizational silos, enabling more interoperable, user-centric approaches for the AI era.
- Open government data and cross-sector innovation, as seen in India's 120 unicorns, can be accelerated by combining DPI with AI, particularly when public data access is improved.
- Significant VC funding in India is concentrated in fintech and e-commerce, but there are growth opportunities in underfunded categories like sustainability and education.
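The "orchestrating APIs at scale" idea in the bullets above can be sketched as a simple intent router that dispatches a citizen's request to the service able to fulfil it. Every service name, endpoint, and response below is a hypothetical stand-in for illustration, not a real DPI API; in a real system an LLM would first classify free-text (possibly multilingual) queries into these intents.

```python
from typing import Callable

# Stub backends standing in for government service APIs (hypothetical).
def pension_status(citizen_id: str) -> str:
    return f"pension status for {citizen_id}: active"

def ration_entitlement(citizen_id: str) -> str:
    return f"ration entitlement for {citizen_id}: 5 kg"

# Intent registry: the orchestration layer maps a resolved intent to a handler.
SERVICES: dict[str, Callable[[str], str]] = {
    "pension": pension_status,
    "ration": ration_entitlement,
}

def orchestrate(intent: str, citizen_id: str) -> str:
    """Route a classified intent to its service, falling back to a human."""
    handler = SERVICES.get(intent)
    if handler is None:
        return "intent not recognised; escalate to a human operator"
    return handler(citizen_id)

print(orchestrate("pension", "ABC123"))
```

The safeguards the panel stressed (consent, bias detection, explainability) would sit around exactly this dispatch layer, which is why embedding them early matters.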
Powering AI | Global Leaders Session | AI Impact Summit India
The session at the India AI Impact Summit 2026 featured OpenAI's Chief Global Affairs Officer addressing the transformational potential of artificial intelligence in India and globally. Key themes included bridging the 'capability gap' between power users and non-users of AI, ensuring equitable access and AI literacy, and fostering agency so individuals can maximize value from AI tools. The speaker highlighted that education is crucial for democratizing AI, calling for updated pedagogical approaches fit for the intelligence age. Citing India’s unparalleled uptake—100 million regular users and rapid platform growth—the session emphasized India’s pivotal, potentially decisive role in setting global norms on whether AI develops along democratic or autocratic lines. Drawing a historical parallel with the printing press, the discussion underscored how access and openness spur societal progress, positioning India not just as a major market but as a key strategic partner in OpenAI’s mission to deliver AI benefits worldwide.
- Vahan.ai highlighted as a company connecting talent to jobs using AI, helping bridge the employability gap.
- India is OpenAI's largest market, with 100 million regular users—about a third of whom are students; the global user base is 800 million.
- Power users of AI tools generate a 7x economic impact versus non-users, emphasizing the need to close the 'capability gap.'
- Access to AI tools in India is prioritized, with free options and a premium subscription at $3.99/month, aiming for affordability and inclusion.
- AI literacy is a central educational goal, with a call for widespread adoption and usage in daily life to maximize individual and societal benefits.
- Agency—proactive and skilled use of AI tools—is identified as the primary challenge and enabler for genuine democratization of AI.
- Historical analogy drawn with the printing press, arguing that openness and decentralization drove Europe’s progress, while centralization stifled innovation in China.
- India's approach to AI—whether democratic or autocratic—could set a global precedent, given its scale and status as the world’s largest democracy.
- OpenAI regards India as a strategic partner, not just a market, aligning with its mission to create AI for the benefit of all humanity.
Building Trusted AI at Scale: Cities, Startups & Digital Sovereignty | India AI Impact Summit 2026
The session featured keynote speeches from Cisco's Jeetu Patel and Qualcomm CEO Cristiano Amon at the India AI Impact Summit 2026. Patel addressed the audience of 250,000 attendees and congratulated India's leadership in convening the global AI community. He outlined AI's accelerated evolution—from chatbots to autonomous agents and soon to physical AI—emphasizing a paradigm shift: AI systems must become integrated partners in work processes, not mere productivity tools. Patel highlighted three core constraints to AI progress: scalable, secure infrastructure; closing the 'context gap' by augmenting models with enterprise and machine data; and overcoming trust deficits by embedding dynamic security and governance at runtime. He shared Cisco's advances in AI-developed software (noting their first 100% AI-coded product), innovations in network and observability infrastructure, and collaboration with India. Patel praised India's unique advantages—a vast digital infrastructure, young talent pool, and nationwide scalability—as central to advancing global AI. He concluded with a call for collective responsibility to develop and deploy AI safely to solve pressing societal challenges such as healthcare, poverty, and education. Following Patel, Cristiano Amon was introduced as Qualcomm's CEO, set to present on the revolution of AI at the edge, bringing AI from the cloud to billions of local devices globally.
- India AI Impact Summit 2026 attracted 250,000 attendees, one of the largest AI gatherings ever.
- AI has rapidly progressed from chatbots to autonomous agents, with physical AI as the next milestone.
- Cisco launched its first product fully developed and coded by AI without human-written code.
- AI development faces three key constraints: infrastructure (power, compute, network), the 'context gap,' and trust/safety deficits.
- Infrastructure requirements are shifting from spiky, burst compute to constant, high-throughput needs as AI agents work continuously.
- Context enrichment for AI is vital: future models require access to proprietary enterprise data and vast amounts of machine-generated data, not just publicly available internet data.
- Over 55% of new data growth will be machine-generated, crucial for next-gen AI systems.
- Trust in AI demands two pillars: protecting agents from external threats (e.g., jailbreaks, data poisoning) and shielding the world from rogue AI via runtime guardrails and dynamic governance.
- Cisco is providing end-to-end AI solutions: secure, scalable networks, context-rich environments, and deep observability across hardware and software layers.
- India’s strengths—youthful workforce, universal digital identity (Aadhaar), robust payment systems (UPI), and large-scale digital infrastructure—position the country as a global leader in AI deployment and innovation.
- The summit underscored a vision where humans confidently and securely delegate tasks to AI, with the societal goal of solving humanity's toughest challenges.
- Cristiano Amon of Qualcomm was set to present next on enabling AI at the edge, making intelligent processing ubiquitous beyond the cloud.
Keynote by Dr. Pramod Varma | Co-founder & Chief Architect, NFH | India AI Impact Summit
In his keynote address at the India AI Impact Summit 2026, the speaker highlighted India's unique advantages in democratizing and scaling AI due to its decade-long investments in digital public infrastructure (DPI). He noted that India has transitioned over a billion people into the formal economy by providing digital identities, bank accounts, and payment platforms like Aadhaar and UPI. These foundational APIs and machine-readable, cryptographically signed data trails have fostered an environment primed for AI innovation. Emphasizing the significance of data ownership and programmability, he predicted that countries blending AI with DPI will far outperform others in the coming decade. Finally, he lauded India's vibrant, risk-taking entrepreneurial culture and forecasted the startup ecosystem reaching one million startups by 2035, underscoring the importance of continued bold experimentation to address India's diverse challenges.
- India has successfully formalized over a billion people by providing digital identities, bank accounts, and payment capabilities.
- Key digital infrastructure components (Aadhaar, DigiLocker, UPI, GST, Fastag) generate verifiable, machine-readable data and are API-based, facilitating programmability and innovation.
- Digital Public Infrastructure (DPI) and the Digital Personal Data Protection Act (DPDP) have made individuals the owners of their own data.
- India's approach has resulted in democratization and inclusion, going beyond elite access to reach entrepreneurs, students, and ordinary citizens.
- The speaker predicts that, in 10 years, countries investing in DPI combined with AI will achieve 10x-50x better economic outcomes than those without such a foundation.
- India’s startup ecosystem has expanded from 1,000 companies in 2016 to 100,000 today, with an expectation to reach 1 million startups by 2035.
- Integration of AI with DPI is positioned as the path to exponential, combinatorial innovation power.
- The session sets the stage for a panel discussion on the integration of AI and DPI, exploring opportunities, risks, and strategies for secure and scalable deployment.
AI Innovation in India
This session at the India AI Impact Summit 2026 spotlighted exceptional young innovation champions who are leveraging AI to solve pressing societal challenges, demonstrating India's capacity to nurture grassroots talent. Three young innovators shared the journeys behind their AI-powered startups: Delta AI Revolution's mental health platform, Charades' sign language translation glove, and advanced multi-modal AI for healthcare diagnostics. The session also commemorated the 10th anniversary of the Atal Innovation Mission (AIM), affirmed by leadership from Intel and AIM as a transformative force scaling India's innovation ecosystem. The unveiling of the Tinkerreneur Compendium, featuring the top 50 student innovators selected and mentored by AIM and Intel, underscored the summit’s commitment to cultivating the next generation of AI leaders. Summit speakers reinforced India’s unique advantage with its vast, adaptable population and its global responsibility to anchor AI in human-centric values. The event celebrated India's emergence as a leader in AI-driven social impact, particularly for the Global South, while emphasizing the need for sustainable, inclusive technological growth.
- Three young innovators presented impactful AI startups: Delta AI Revolution (AI-driven mental health platform), Charades (sign-language glove translating gestures to speech/braille), and Newex CIS (multi-modal AI for radiology and dermatology diagnostics).
- Delta AI Revolution addresses India's acute psychiatrist shortage (roughly one psychiatrist per 100,000 people), partnering with psychiatrist clinics and planning B2C expansion.
- Charades developed a deep-learning-powered glove to aid the deaf-blind, successfully trained on thousands of images through national mentorship programs.
- Newex CIS employs retrieval-augmented vision-language models for real-time medical diagnostics, focusing on robustness to distribution shifts and multimodal reasoning.
- The Atal Innovation Mission (AIM) marked its 10th anniversary, now the world’s largest grassroots innovation initiative.
- Intel and the Ministry of Electronics & IT were recognized for ongoing funding, support, and mentorship of young innovators.
- The Tinkerreneur Compendium was unveiled, showcasing the top 50 AI student innovators selected and trained by AIM and Intel.
- Leadership highlighted India’s demographic advantage (projected population: 1.6 billion by 2060) and resilience in unstructured environments as key strengths in AI adoption.
- The summit was celebrated as the first in the Global South with a distinctly human-centric framing of AI, calling for responsible innovation.
- AI was positioned as a societal multiplier with India expected to be its primary global benefactor and a driving force for transformative economic ascent.
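The retrieval-augmented pattern attributed to Newex CIS above can be sketched simply: before the vision-language model answers, the system retrieves similar prior cases and adds them to the model's context, which is one common way to harden a model against distribution shift. The embeddings, case library, and "model" below are toy stand-ins invented for illustration, not the startup's actual system.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Tiny case library of (embedding, report text) pairs. Real systems would embed
# images/reports with a learned encoder; these vectors are hand-made.
CASE_LIBRARY = [
    ([0.9, 0.1, 0.0], "case A: nodule in upper left lobe"),
    ([0.1, 0.9, 0.1], "case B: clear scan, no findings"),
]

def retrieve(query_emb, k=1):
    """Return the k most similar prior case reports."""
    ranked = sorted(CASE_LIBRARY, key=lambda c: cosine(query_emb, c[0]), reverse=True)
    return [text for _, text in ranked[:k]]

def answer(query_emb, question):
    # A real vision-language model call would go here; we just show the
    # retrieval-augmented prompt that would be passed to it.
    context = retrieve(query_emb)
    return f"context={context[0]!r} question={question!r}"

print(answer([0.85, 0.2, 0.05], "is this scan abnormal?"))
```

Grounding each answer in retrieved precedents is what gives the "robustness to distribution shifts" described in the bullet: unfamiliar inputs still land near some known case.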
Panel Discussion: Next Generation of Techies | India AI Impact Summit
The panel discussion titled 'Next Generation of Techies,' moderated by Anirudh Suri of India Internet Fund, brought together a diverse group of AI entrepreneurs and policy experts to explore how the trajectory of technology-driven entrepreneurship is evolving in the AI era. Panelists included Malhar, a young AI-driven biotech startup founder; Navrina, founder of Credo AI and AI governance advisor; and Arvind, CEO of the enterprise AI company Glean. Major themes across the discussion included the increasingly lean structure of startups enabled by AI automation, the democratization of cross-disciplinary research and knowledge, and the vital importance of integrating AI policy and governance thinking into entrepreneurial endeavors. Panelists highlighted both similarities and marked differences between previous technology waves (consumer internet, mobile, social) and the current AI wave—most notably, the fundamental rethinking of organizational structure and talent requirements, the critical nature of research and regulatory compliance in emerging fields, and the unprecedented agility that AI tools afford startups. The discussion underscored that in today's climate, a deep understanding of AI, effective application of research, and a proactive approach to AI policy and regulation are essential for new tech founders aiming for scalable and responsible innovation.
- Panel highlighted the leaner nature of AI startups—founders can build minimum viable products and achieve significant milestones with smaller teams and lower initial capital requirements due to AI automation.
- Malhar's startup Origin Bio operates with a five-person team (only one with a biology background), leveraging AI to conduct research, read scientific literature, and predict experimental results, demonstrating how AI lowers entry barriers and enables cross-disciplinary innovation.
- Navrina, an AI governance expert, stressed the growing importance of AI policy and regulation even for tech entrepreneurs, having advised the White House and global governments on AI guardrails.
- Arvind, CEO of Glean, noted a fundamental shift in organizational design—traditional blueprints for building teams are becoming obsolete as AI transforms roles, workflows, and internal knowledge management.
- Panelists emphasized the need for founders to focus on solving real business problems using AI, not just riding technology trends, and to remain research-driven in sectors like biotech and enterprise AI.
- Discussion underscored that reliability, compliance (e.g., FDA for biotech), and robust product development have become critical differentiators as going from zero to one is now much easier in AI.
AI for Safer Workplaces & Smarter Industries: Transforming Risk into Real-Time Intelligence
The session highlighted Benchmark Gensuite's three-decade journey in digitizing Environment, Health, and Safety (EHS) management, with a recent strategic pivot toward an AI-first approach across its global SaaS platform. The speakers detailed a paradigm shift from manual, retrospective safety reporting toward real-time, AI-enabled experiential engagement for workplace safety, compliance, and risk management. Multiple live demonstrations showcased the use of AI agents—such as Jenny AI and Ergo AI—that leverage image and voice recognition, natural language processing, and large language models to automate hazard reporting, root cause analysis (including 5-Why), ergonomic risk assessment, and legal compliance deconstruction. These AI-powered tools democratize workplace safety and compliance, enabling workers at all skill levels and languages to participate in safety culture, bridging expertise gaps, and facilitating proactive risk mitigation. The company currently boasts 75+ AI use cases in production, with ambitions to advance agentic autonomy for operational actions, all while maintaining strong guardrails for human oversight.
- Benchmark Gensuite has shifted to an AI-first strategy over the past three years, transforming its SaaS EHS platform.
- Their platform supports 450 global subscribers with 8 million users for compliance assurance, EHS, sustainability, and ESG management.
- 75+ AI use cases have been developed to date with ongoing work to expand practical applications.
- Jenny AI leverages image and voice input, auto-populating hazard and incident reporting forms and supporting multiple languages.
- AI agents facilitate root cause analysis (5-Why) and suggest corrective/preventive actions directly, even for non-experts.
- A new agent, Ergo AI, evaluates ergonomic risks from video input, supporting sites lacking certified ergonomists.
- AI-driven legal compliance deconstruction enables instant operationalization of complex regulatory documents.
- Demoed agentic shift: AI is moving from insight generation toward autonomous, actionable operations within defined human-governed guardrails.
- The tools significantly lower expertise, language, and accessibility barriers for safety participation and compliance in workplaces.
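The automated 5-Why analysis described above can be sketched as a loop: the agent repeatedly asks "why?" about the previous answer until no deeper cause is proposed. Here `ask_why` is a stub standing in for an LLM call, and the incident chain is invented purely for illustration, not drawn from the demo.

```python
# Hypothetical cause chain an LLM might surface for one incident.
CAUSE_CHAIN = {
    "worker slipped on floor": "oil was spilled near the press",
    "oil was spilled near the press": "a hose fitting was leaking",
    "a hose fitting was leaking": "preventive maintenance was overdue",
    "preventive maintenance was overdue": "no maintenance schedule exists",
}

def ask_why(observation):
    """Stand-in for an LLM that proposes the cause of an observation."""
    return CAUSE_CHAIN.get(observation)

def five_whys(incident, max_depth=5):
    """Follow the cause chain up to max_depth steps, stopping at a root cause."""
    chain = [incident]
    for _ in range(max_depth):
        cause = ask_why(chain[-1])
        if cause is None:  # no deeper cause proposed: treat as the root
            break
        chain.append(cause)
    return chain

chain = five_whys("worker slipped on floor")
print(" -> ".join(chain))
```

A corrective-action suggester (the "suggest corrective/preventive actions" bullet) would then be prompted with `chain[-1]`, the root cause, rather than the surface symptom.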
Scaling Innovation: Building a Robust AI Startup Ecosystem
The India AI Impact Summit 2026's Startup Felicitation Ceremony, organized under the Software Technology Parks of India (STPI) ecosystem, celebrated outstanding achievements of Indian startups across multiple verticals including revenue growth, funding, employment generation, women empowerment, innovation, and AI-led impact. Key startups were recognized for their exemplary contributions in revenue, funding, employment (including women), innovation, and sector-specific breakthroughs, especially in AI, healthcare, and food robotics. The event also featured passionate testimonials from founders, highlighting the crucial role of STPI in startup journeys—enabling access to funding, mentorship, industry connects, and global exposure. These stories underscored India's scalable and globally-relevant innovation ecosystem, spanning Tier 1 to Tier 3 cities. The session concluded with expressions of gratitude from dignitaries, a group photograph, and encouragement for the ongoing partnership between government, industry, and startups to build India's digital and innovation economy.
- STPI ecosystem startups were recognized for excellence in revenue, funding, employment, women participation, innovation, and AI-based impact.
- Phoenix Marine Exports & Solutions Pvt Ltd awarded for highest revenue (up to ₹25 crore) and impact in Tier 2 and 3 regions.
- Gemio Consulting Pvt Ltd felicitated for raising the highest funding in the up-to-₹25 crore revenue category.
- Swadha Agri Pvt Ltd recognized for generating the highest employment (up to ₹25 crore revenue bracket).
- Strangefi Technologies Pvt Ltd awarded for highest women employment (up to ₹25 crore revenue category).
- Suhora Technologies Pvt Ltd lauded for top revenue in the up-to-₹50 crore bracket.
- Nuwation Technology Solutions marked for highest funding in the up-to-₹50 crore revenue category.
- Tech IT Solutions Pvt Ltd received multi-category awards: highest employment, women employment, and AI-led impact (all up to ₹50 crore revenue).
- Atmik Bharat Industries and Mobile Pay E-Commerce recognized for maximum impact based on number of beneficiaries.
- Devnagari AI Pvt Ltd awarded for high AI-based impact on revenue (second position).
- Detrosell Healthcare & Research Pvt Ltd recognized as the most innovative startup for AI-powered healthcare diagnostics.
- EzyFi Solutions Pvt Ltd and Pyushle Innovations Pvt Ltd acknowledged as most promising innovation (with Pyushle at second position).
- Connector Foods Pvt Ltd recognized as a runner-up for most innovative startup.
- Testimonial: Uless Innovations (drone tech for agriculture, defense, and disaster management, serving 10,000+ farmers; presented solutions to PM Modi).
- Testimonial: Detrosell (AI diagnostics at the intersection of radiology and DNA sequencing, scaling globally).
- Testimonial: SecuraTech (simplifying cybersecurity for Indian enterprises).
- Testimonial: Kenboard Solutions (AI-powered food robotics platform creating direct market linkages for farmers).
- Testimonial: EzyFi Solutions (AI-driven oncology treatment planning; detected 4,000+ TB cases through 1 million scans, drawing interest from PM Modi and Bill Gates/Microsoft).
- Strong emphasis on STPI’s role in mentoring, funding, global connect, and platform exposure for Indian startups.
- Dignitaries from STPI, National Productivity Council, TiE Delhi NCR, and industry joined for awards, mementos, and closing acknowledgments.
- Repeated calls for group photographs and celebration of all recognized startups as pillars of India's digital innovation journey.
City-Scale Innovation: How Delhi’s Universities are Driving Public AI Impact
The session showcased advanced AI-driven education and enterprise solutions developed by two companies at the India AI Impact Summit 2026. The first demo, centered on the 'Leoi' platform, displayed an adaptive, multilingual, and inclusive AI-powered learning and assessment system for K-12 students, educators, and skill training, highlighting features such as auto-generated assessments, progress reports, personalized learning modes, teacher-centric lesson planning, and seamless multimedia integration for content delivery. The platform extends beyond academic content to areas like yoga training, demonstrating its versatility for lifelong learning and upskilling. In the enterprise context, 'Buch' demonstrated its commitment to democratizing AI for India's SMEs by introducing a modular, low-cost, and accessible architecture that empowers small businesses to automate repetitive tasks, improve accuracy, and multiply productivity without requiring deep technical expertise. Their solutions include multilingual voice assistants, multimodal chatbots, and 'push and talk' agents that leverage grounded company data, enable 24/7 lead management, and offer model-agnostic integration with various LLM providers. Across both education and business verticals, the session emphasized making sophisticated AI practical, grounded, and beneficial for previously underserved audiences.
- Leoi platform offers AI-powered, multilingual educational tools for K-12 and skill development, supporting students and teachers with adaptive learning modes and real-time assessments.
- Comprehensive assessment generation includes MCQs, true/false, short/long answers, and fill-in-the-blank questions, tailored to student profiles and learning analytics.
- Teacher features include automated lesson planning, custom assignment creation with marking guides, and quick concept revision or teaching aids—all aligned to national curricula.
- Expansion into skill development (e.g., yoga) showcased the platform’s ability to combine video, multimedia content, and conversational learning for practical skills acquisition.
- Buch focuses on democratizing AI for India's small and medium enterprises with affordable, user-friendly solutions that require minimal tech skill.
- SME AI solutions are built as plug-and-play modular agents—multilingual voice assistants, multimodal chatbots, and a ‘push and talk’ agent for WhatsApp-style voice workflows.
- Innovative use of a 'welcome LLM' minimizes latency and ensures responsive user experiences on low-cost infrastructure.
- Solutions are model-agnostic, configurable, and can be rapidly adapted to different industries and company-specific knowledge bases, keeping costs low and adoption high.
- Key benefit: human oversight focuses on just 5% of workflows, while AI automates the remaining 95%, achieving up to 20x productivity improvement and cost reduction.
- Strong emphasis on grounded data, empathetic AI interactions, and addressing real operational needs (lead management, customer service, information access, etc.).
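The "model-agnostic, plug-and-play" pattern described above can be sketched in a few lines. Everything here is illustrative: the provider interface, the `EchoProvider` stand-in, and the instant greeting (a stand-in for the low-latency "welcome LLM" idea) are assumptions for the sketch, not Buch's actual code.

```python
from abc import ABC, abstractmethod

class LLMProvider(ABC):
    """Minimal provider-agnostic interface (hypothetical sketch)."""
    @abstractmethod
    def complete(self, prompt: str) -> str: ...

class EchoProvider(LLMProvider):
    """Stand-in provider so the sketch runs without network access;
    swapping in a real provider only requires implementing complete()."""
    def complete(self, prompt: str) -> str:
        return f"[echo] {prompt}"

class Agent:
    """Grounded agent: answers only from a small company knowledge base,
    and emits an instant acknowledgement before the slower model call."""
    def __init__(self, provider: LLMProvider, knowledge_base: dict):
        self.provider = provider
        self.kb = knowledge_base

    def handle(self, query: str) -> list:
        messages = ["Namaste! One moment while I check that for you."]  # instant ack
        # Ground the prompt in company data before calling the (swappable) model.
        context = self.kb.get(query.lower(), "no matching record")
        messages.append(self.provider.complete(f"Context: {context}\nQuestion: {query}"))
        return messages

agent = Agent(EchoProvider(), {"opening hours": "Mon-Sat, 9am-7pm"})
print(agent.handle("opening hours"))
```

Because the agent depends only on the abstract interface, the same workflow can be pointed at different LLM vendors, which is the cost and lock-in advantage the session highlighted.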
AI Without the Cost: Rethinking Intelligence for a Constrained World
The session at the India AI Impact Summit 2026 delved deeply into the escalating costs and infrastructural bottlenecks facing AI development, particularly with regard to the overreliance on GPU-based architectures. The panel, which included leaders from STEM Practice Company (an Oracle partner), Tata Group, Oracle, Rice University, Meta, and Genloop, highlighted an urgent need to return to foundational mathematical optimizations and smarter software engineering practices. They argued that with proven techniques, many AI workloads can be shifted from expensive, high-power GPUs to more accessible CPUs, edge devices, and even mobile platforms, driving down cost and energy consumption. Specifically, the session emphasized that the exponential increase in AI model complexity and parameter counts is outpacing hardware growth, deepening the gap between computation supply and demand. Techniques such as dynamic sparsity, block sparsity, and innovations in context window handling were identified as key areas for breaking through current bottlenecks and enabling the next generation of complex AI solutions, especially in the Indian context. Concrete case studies, such as achieving 100% AI accuracy without GPU usage for Tata, demonstrated the practical viability and impact of these approaches.
- Current AI infrastructure is dominated by costly, high-heat, failure-prone GPUs, leading to unsustainable power and financial demands.
- STEM Practice Company, an Oracle partner, demonstrated 100% AI accuracy for a Tata Group use-case without using GPUs.
- There is a widespread neglect of traditional software and mathematical optimizations when deploying AI models, missing huge opportunities for infrastructure cost savings.
- The growth in AI model parameter counts (such as LLMs) is vastly outpacing the increases in GPU memory and compute capabilities, a gap demonstrated with recent research and visualized on logarithmic plots.
- Novel techniques like dynamic sparsity (selectively computing only what's needed for an input) and mixture of experts have become mainstream to improve computational efficiency—but these too are beginning to plateau.
- The next key battleground for AI advancement is expanding the context window of LLMs (the model's working memory), which is essential for handling more complex, multi-step tasks; experimental results, however, suggest these gains are also stalling.
- Panelists stressed the importance of revitalizing decades-old mathematical and optimization methods, now commercially viable thanks to market demand and AI scale.
- The India context is especially crucial: enabling AI on CPUs, edge, and mobile devices unlocks accessibility and cost advantages vital for local industry and innovation.
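The dynamic-sparsity and mixture-of-experts ideas the panel described can be illustrated with a toy top-k router: only k experts run per input, so compute scales with k rather than with the total expert count. This is a minimal sketch under stated assumptions; the linear "experts" and random weights stand in for real feed-forward blocks and trained routers.

```python
import numpy as np

def top_k_moe(x, gate_w, experts, k=2):
    """Dynamic sparsity via top-k mixture-of-experts routing:
    only k of the experts run for a given input, so compute scales
    with k rather than with the total number of experts."""
    logits = x @ gate_w                   # router scores, shape (n_experts,)
    top = np.argsort(logits)[-k:]         # indices of the k highest-scoring experts
    weights = np.exp(logits[top])
    weights /= weights.sum()              # softmax over the selected experts only
    # Weighted sum of just the selected experts' outputs.
    return sum(w * experts[i](x) for w, i in zip(weights, top))

rng = np.random.default_rng(0)
d, n_experts = 8, 4
gate_w = rng.normal(size=(d, n_experts))
# Each "expert" here is a simple linear map; real experts are full FFN blocks.
expert_ws = [rng.normal(size=(d, d)) for _ in range(n_experts)]
experts = [lambda x, W=W: x @ W for W in expert_ws]

x = rng.normal(size=d)
y = top_k_moe(x, gate_w, experts, k=2)
print(y.shape)  # (8,)
```

With k fixed, adding more experts grows model capacity without growing per-input compute, which is why the technique matters for CPU and edge deployment.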
Keynote: ‘I’ to the Power of AI | An 8-Year-Old on Aspiring India Impacting the World
The session emphasized India's commitment to AI sovereignty, inclusivity, and impact, positioning the country as a global leader in the responsible and democratized development of artificial intelligence. Highlighting comparisons with global AI strategies—US innovation, Chinese centralized control, European regulatory frameworks, and Middle Eastern infrastructure—the speaker framed India's approach as uniquely focused on digital independence and broad-based participation. With over 7,500 datasets and 273 AI models deployed under the India AI mission, AI compute costs have been significantly reduced (less than 2 cents per minute), enabling startups, researchers, and diverse communities to build solutions across sectors. A personal use case demonstrated how the India AI sovereign model empowers cultural and linguistic inclusion, allowing impactful contributions by even the youngest generation. The session concluded with a call for global collaboration and multilateral cooperation through the upcoming GPAI council, ensuring that AI development remains human-centric and inclusive.
- India prioritizes digital sovereignty, inclusivity, and broad societal impact in its AI strategy.
- 7,500 datasets and 273 AI models have been deployed as public resources under the India AI mission.
- AI compute power under the India AI mission costs less than 2 cents per minute, making AI highly affordable for startups and developers.
- India’s sovereign AI model supports translation into 22 Indian languages, facilitating both cultural inclusion and economic benefit (e.g., increased book sales and royalties).
- AI literacy is being promoted in alignment with the National Education Policy 2020, introducing AI from grade 3 onwards.
- India’s approach contrasts with the US (innovation-driven), China (centralized and government-driven), Europe (regulation- and trust-oriented), and Middle East (infrastructure-focused).
- GPAI council members are set to define multilateral cooperation on responsible and inclusive AI.
- The younger generation is actively contributing to and shaping India's AI landscape, not merely consuming it.
Panel Discussion: AI in Healthcare | India AI Impact Summit
The session at the India AI Impact Summit 2026 focused on the transformative potential of artificial intelligence in healthcare, especially within India and across both high-income (Switzerland) and low-/middle-income country (LMIC) contexts. The panel, featuring Dr. Sabin Kapasi, Chris (Managing Director, Anthropic), and Dr. Aditya (India relations advisor, Canton Rod Switzerland), discussed tangible near-term opportunities for AI in improving healthcare access, reducing administrative overhead, and accelerating drug discovery. Anthropic highlighted its commitment to safety-centric AI development and announced its new Bengaluru office, emphasizing the critical importance of deploying AI solutions that acknowledge local contexts and risks. Switzerland, bringing a research-driven but costly healthcare system, is now collaborating more closely with India through a $100B investment and expects to co-create a million direct jobs, including in healthcare. The panelists stressed integrating AI from inception in startups and called for collaboration between clinical experts, technologists, and policymakers to maximize value and safety in healthcare transformation, urging for adaptable infrastructure and trustworthy deployment of AI in clinical and administrative workflows.
- Anthropic announced the opening of its office in Bengaluru, India, to address on-the-ground AI opportunities in Indian healthcare.
- Switzerland and India have signed a free trade agreement committing $100B of investment in India across several sectors, including healthcare, with the aim of creating 1 million direct jobs over the next 15 years.
- Current healthcare challenges highlighted include: only 30% of US clinician time spent on patient care (rest on admin/paperwork); average primary care visit in India is just 2 minutes.
- Anthropic’s Claude LLM is designed to express uncertainty (e.g., 'I don’t know') in critical contexts to prioritize safety, which is considered essential in healthcare applications.
- Use cases discussed: reducing administrative burden (potential $1T impact in the US), accelerating drug discovery (customers have reduced tasks from 8 weeks to 8 hours), and increasing healthcare accessibility in LMICs.
- Swiss startups raised $2.5B last year, with many integrating AI from inception to attract investment and drive innovation.
- Switzerland’s healthcare system, though efficient and high-quality, is struggling with high costs, and AI is seen as key to process optimization and managing public dissatisfaction regarding premiums.
- There is a strategic focus on strengthening Indo-Swiss collaboration to facilitate tech transfer, startup growth, and cross-border healthcare innovation.
- AI safety, contextual adaptation, and transformative collaboration between non-clinical technologists and clinical practitioners were stressed as fundamental for successful AI-driven healthcare solutions.
Keynote by Sangita Reddy | Joint Managing Director, Apollo Hospitals | India AI Impact Summit
The session, delivered by a senior Apollo Hospitals executive at the India AI Impact Summit 2026, spotlighted the transformative role of AI and digital health in democratizing healthcare across India. Drawing from Apollo’s extensive experience, the speaker outlined how AI-driven solutions are improving clinical decision-making, disease prediction, and operational efficiencies while fostering preventive and personalized care. Key initiatives include Apollo’s digital health platforms (Apollo 247 and Apollo Assist) now handling nearly a million daily interactions, integration of AI in clinical workflows, remote and rural health outreach, and partnerships for validation and scale-up of AI tools. Emphasis was placed on the EASE framework for ethical AI adoption in healthcare, data-enabled risk stratification for non-communicable diseases, and collaborative innovation across public and private sectors. The session concluded with a call to reimagine healthcare systems as interconnected, data-driven, and place-agnostic, aiming for a future where high-quality care is accessible regardless of geography.
- Apollo’s digital health platforms now serve 45 million users, with around 1 million daily interactions.
- Over 3.5 million API calls are made on Apollo’s AI platforms, which operate across five core areas: clinical intelligence, disease prediction, signal/image analysis, acute care early warning systems, and operational optimization.
- Clinical AI tools process more than 20 million health records to enhance decision-making for doctors.
- Apollo’s AI-driven sepsis prediction tool is connected to ~2,000 critical care beds, with ambitions to expand coverage to 100,000 ICU beds nationwide.
- AI-powered throughput optimization is reducing doctor burnout by saving 1–1.5 hours daily in record management and patient billing.
- 19 AI solutions have received or are receiving MDSAP approval; 9 are FDA-approved, showing compliance with global standards.
- Strong emphasis on the EASE framework (Ethics, Adoption, Suitability, Explainability) for responsible, transparent healthcare AI.
- AI-powered preventive care includes risk profiling, biometric screening, and early disease detection, including embedded AI in ultrasound for NAFLD, which affects 40% of adult Indians.
- Partnerships with Google and 3M have enabled AI algorithms for TB and pre-diabetes prediction, applied to over 450,000 people.
- Remote care is being expanded through mobile vans, teleophthalmology, and NCD screening in rural India.
- Call to close the gap between pilots and scalable validation of AI innovations in healthcare.
- Vision for the ‘health system of the future’: integrating primary, preventive, curative, home, and advanced care using data-driven and AI-enabled technologies spanning public and private sectors.
Fireside Chat: Intel, Tata Electronics, CDAC & Asia Group | India AI Impact Summit
The session at the India AI Impact Summit 2026 provided a deep dive into the practical challenges and groundbreaking progress around India's AI stack, highlighting how government-driven compute infrastructure and enterprise needs are converging. Key policy announcements were showcased, including substantial investments by Microsoft ($20B for India, $50B in the Global South) and Google ($15B), as well as strategic partnerships such as Anthropic with Infosys and Tata collaborating with OpenAI. Vivek Kania from C-DAC discussed the evolution of India's supercomputing capacity, now scaling from 48 petaflops to 100 petaflops by the end of 2026, powering critical research and enabling thousands of researchers and startups. On the industry side, Intel India's Nathan Baj outlined the hurdles Indian enterprises face in scaling AI, namely uncertainty over deployment models (cloud, on-premise, edge), ROI concerns, lack of MLOps expertise, and the complexity of making hardware and software choices. Both panelists explored the intersection of sovereignty and infrastructure: while India is moving toward increasing control over AI software and deployment layers, it remains dependent on global silicon, though local R&D into RISC-V based GPGPUs is underway. Data sovereignty is becoming a critical driver in certain sectors (finance, healthcare) but less so in others, with enterprises weighing it against speed, performance, and cost considerations. The discussion emphasized that deployment at true scale is just beginning, requiring smarter choices, greater MLOps capabilities, and a pragmatic approach to sovereignty.
- Microsoft announced $20B in AI infrastructure investment for India, and $50B for the Global South.
- Google's India AI investment has escalated to $15B.
- Anthropic announced a high-profile AI partnership with Infosys; Tata is collaborating with OpenAI.
- C-DAC's Param supercomputer series offers 48 petaflops (to expand to 100 by year-end 2026) and supports 15,000 researchers plus MSMEs and startups across key scientific domains.
- The National Supercomputing Mission will operate 60 installations on India's National Knowledge Network (NKN).
- Common workloads on government AI compute infrastructure include drug discovery, climate modeling, bioinformatics, and oil exploration.
- Enterprises face hurdles moving from AI pilots to scaled production due to ROI uncertainties, deployment model complexity (on-prem vs cloud vs edge), and maturity of MLOps talent.
- Indian AI deployments are evolving from large language models (LLMs) toward more focused small language models (SLMs), emphasizing frugal AI and practical ROI.
- India’s sovereign control is pragmatic: global silicon is still required, but everything above the chip layer (models, orchestration, applications) is increasingly localized.
- C-DAC is developing a RISC-V based sovereign GPGPU targeted for 2029-30.
- Policy and data sovereignty considerations significantly impact AI adoption in regulated sectors but are less determinative for other industries.
- Enterprises are growing in sophistication but most production-grade deployments are still emerging.
From Innovation to Impact: Bringing AI to the Public
The session at the India AI Impact Summit 2026 featured a visionary discussion on India’s unique opportunity to leverage AI for robust economic growth, social inclusion, and the creation of indigenous technological breakthroughs. The speaker emphasized that AI should not be perceived as a threat to traditional jobs but as an accelerator for productivity, economic expansion, and the transformation of India into a global AI leader. The panel advocated for India to invest in building its own large language models (LLMs) and AI foundation models, rooted in Indian culture, knowledge systems, and languages, to overcome international biases and better serve domestic needs. The conversation further explored the practical importance of building vertical-specific AI models—especially in finance, agriculture, and healthcare—to drive productivity, democratize access to high-quality financial and health advice, and remove systemic biases, particularly in critical sectors like credit access. This inclusive approach aims to empower the broad population, from rural shopkeepers to urban auto drivers, ensuring AI-driven economic and social benefits. The panel dispelled concerns about the size of investment needed for foundation models, highlighting India's demonstrated will to invest in digital infrastructure and the need for local innovation over mere replication. Ultimately, the session painted AI as a transformative engine—akin to the internal combustion engine’s industrial impact—enabling the creation of tailored solutions for India's unique context.
- AI is viewed not as a job-reducing force but as a driver for productivity and economic growth, with the potential to propel India’s GDP and create globally dominant businesses.
- India has the opportunity to compound its $2 trillion economy within the next 7–10 years, with AI playing a pivotal role.
- The need for indigenous AI foundation models is pronounced—not for prestige, but to encapsulate local knowledge, languages, and cultural nuances that global models miss.
- Recent launch of an Indian foundation model (Serb) was applauded, with a call for many more indigenous efforts to ensure plurality and global credibility.
- Vertical-specific LLMs are essential for solving real problems in sectors like financial services (risk/fraud, inclusion), agriculture (crop/farm management), and healthcare (personalized advice).
- Building AI models in India is not merely about massive capital investment ($1bn+), but about smart innovation, resource allocation, and leveraging India’s tech talent, as shown by prior investments in digital payment infrastructure.
- AI enables significant reduction of biases in financial decision-making (e.g., credit risk assessment), further democratizing access to financial services for underserved populations.
- AI-driven advisory tools can help everyday Indians—auto drivers, small savers—make informed decisions about savings, investments, and health in their native languages.
- The analogy was drawn between LLMs and internal combustion engines—universal technology 'engines' upon which India can build custom solutions for local needs.
- AI will facilitate broader inclusion, impacting not just banking and payments, but wealth management, insurance, healthcare, and education for the masses.
Nepal Engagement Session
The session highlighted how AI-driven language technologies are fundamentally transforming rural governance across India's 250,000-plus Gram Panchayats. Initiatives such as the e-gram SWARAJ portal, Bhashini language platform, and the AI-powered Sabhasar meeting summary tool have significantly enhanced transparency, efficiency, and inclusivity. By integrating tools that enable documentation and information sharing in local languages, rural citizens can now actively participate in local governance, track financial flows, and monitor local projects in real time. With over 115,115 Gram Sabha meetings already processed through Sabhasar and major states adopting this innovation, officials report improved meeting documentation, increased citizen engagement, and progress toward a participatory digital ecosystem that includes even remote and linguistically diverse communities. Challenges regarding training, connectivity, and dialect coverage remain, but active expansion into more languages and capacity building are underway. The Ministry of Panchayati Raj’s collaboration with tech providers and community stakeholders demonstrates a scalable, participatory model that is rapidly changing how rural India interacts with and trusts local governance structures.
- Over 2.5 lakh Gram Panchayats are onboarded and operate via the e-gram SWARAJ digital portal, covering everything from planning to payments.
- Bhashini, a multilingual AI platform, was integrated in 2023–2024 to allow Panchayat financials and records to be viewed in local languages by any citizen.
- Sabhasar, launched on 14 August 2025 by the Ministry of Panchayati Raj, uses AI-powered speech recognition (Bhashini ASR) to generate meeting minutes from audio/video, streamlining record-keeping and transparency.
- By 4 February 2026, 115,115 Gram Sabha meetings had been processed through Sabhasar, with adopting states such as Odisha, Tamil Nadu, and Tripura progressing to more advanced tracking and agenda refinement.
- Drone surveys under the Swamitva scheme generate data on village land, which AI now processes to assess rooftop solar potential—integrated with PM Suryoday Yojana for targeted solar panel deployment in 2.38 lakh Panchayats.
- Language coverage gaps remain, but new efforts will expand Bhashini to 11 more Indian languages, enhancing accessibility for dialect-rich regions.
- AI solutions sidestep infrastructural constraints: Panchayats need only a mobile device to contribute records, with backend processing handled centrally.
- Capacity-building programs are being expanded to train Panchayat staff and citizens to leverage these digital tools.
- Structured, public documentation is driving greater citizen trust, accountability, and participatory governance down to the last mile.
- Cross-departmental adoption is underway, with agencies like the Department of Drinking Water piloting Bhashini-based documentation for Village Water Committees.
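The Sabhasar flow described above (meeting audio in, structured minutes out) can be sketched as a two-stage pipeline. This is a toy under stated assumptions: the `asr` stub and the keyword rules are invented for illustration, and Bhashini's actual ASR API and Sabhasar's real summarization logic are not shown here.

```python
# Stage 1: speech recognition (stubbed); Stage 2: rule-based minutes extraction.

def asr(audio_chunk: bytes) -> str:
    """Placeholder: a real system would send audio to an ASR service
    (e.g. Bhashini ASR, whose API is not reproduced here)."""
    return audio_chunk.decode("utf-8")  # our 'audio' is just text for the demo

def extract_minutes(transcript: str) -> dict:
    """Very simple pass: tag lines as decisions, action items, or discussion."""
    minutes = {"decisions": [], "actions": [], "discussion": []}
    for line in transcript.splitlines():
        low = line.lower()
        if low.startswith("decision:"):
            minutes["decisions"].append(line.split(":", 1)[1].strip())
        elif low.startswith("action:"):
            minutes["actions"].append(line.split(":", 1)[1].strip())
        elif line.strip():
            minutes["discussion"].append(line.strip())
    return minutes

audio = (b"Decision: approve new hand pump for ward 3\n"
         b"Action: sarpanch to file estimate\n"
         b"General talk about road repair")
print(extract_minutes(asr(audio)))
```

The structural point matches the session's claim: the Panchayat side only needs a device to capture and upload audio, while transcription and structuring happen centrally.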
Educating for Viksit Bharat: Why Creativity, Cognition & Culture Matter
The session, 'Preparing Students for 2030,' at the India AI Impact Summit 2026 addressed the challenges and opportunities AI brings to education, employability, and human identity. Key panelists—spanning governance, academia, and administration—emphasized the enduring significance of creativity, cognition, and culture as foundational pillars distinguishing humans from machines. As AI redefines skill requirements and shortens the shelf-life of hard skills, panelists underscored the necessity of focusing on adaptability, critical thinking, and ethical application, rather than rote learning or technical skills alone. The panel acknowledged widespread anxiety about AI's impact, particularly among youth, but urged a shift toward embracing human originality, culture-rich learning, and practical application of intelligence. It was asserted that while AI can augment efficiency and productivity, the ultimate responsibility for decision-making, values, and adaptability remains uniquely human. The session ended by exploring AI's role in personalized education, especially for underserved populations, highlighting that the responsible integration of AI can expand opportunities and help address systemic educational limitations.
- Panelists agreed creativity, cognition, and culture are core human attributes that must be emphasized as AI proliferates.
- Coding is now considered a baseline skill; the ability to apply and contextualize technical knowledge is increasingly important.
- Rising use of AI in education is leading to concerns about job displacement, but panelists projected new types of employment will emerge.
- The 'shelf life' of hard skills is decreasing rapidly, now measured in a few years instead of decades.
- Students are being trained not just to use AI tools, but to think critically about prompting and applying AI outputs.
- AI is seen as a tool to enhance efficiency, but ethical use and decision-making remain in human hands.
- Panelists encouraged a focus on developing 'applied intelligence'—the ability to contextualize and solve real-world problems.
- There is widespread fear and uncertainty about AI's long-term impact, particularly among youth and professionals.
- The panel called for leveraging AI to address societal challenges, such as personalizing education in under-resourced communities.
- No concrete consensus or timeline was given for whether or when AI might supersede human intelligence, but the importance of resilience and adaptability was stressed.
Waves of infrastructure: Open Systems, Open Source, Open Cloud
The session at the India AI Impact Summit 2026 focused on the rapid evolution and scaling challenges of AI infrastructure, emphasizing the parallels between past hardware and software innovation cycles and today’s AI-driven compute explosion. The speaker highlighted their organization's recent launch and unique approach—building software-first, then hardware—for enterprise private cloud AI infrastructure, and referenced ongoing and upcoming partnerships, notably with UC San Diego for AI in health, research, and education. The address situated current trends within long-term technological shifts, underlining how AI differs from previous eras by promising to impact 95% of work and democratize programming to a population scale via natural language interfaces. The presentation tracked exponential increases in global capital, power, and compute demands, citing India’s data center growth from nearly zero to 10 gigawatts, and forecasted an unprecedented $2 trillion global infrastructure spend in the next decade. Technical discussions covered the critical impact of network (Ethernet evolution), memory hierarchies, shifting workloads toward inference and biological computation, and the need for open, heterogeneous infrastructure strategies. The session positioned the present era as a new inflection point, analogous to past transitions (workstations, cloud, open source), but with implications spanning economic growth, sovereign data, and national capability at population scales.
- Organization announced the recent launch of its enterprise private cloud AI infrastructure offering focused on the Indian market.
- Highlighted new and deepening partnerships with UC San Diego, including involvement in public-private AI initiatives and the newly created School of Computing, Information, and Digital Sciences.
- Pointed to India’s rapid data center buildout—from virtually zero to 10 gigawatts—compared to the US scaling from 25 to 125 gigawatts.
- Predicted global AI and cloud infrastructure spending will hit $2 trillion within 5-10 years, up from $300-500 billion today.
- Stressed that AI will impact 95% of all work, representing a much larger total addressable market (TAM) and transformative potential than the SaaS era.
- Framed the present as a historic inflection point, echoing past waves such as the rise of microprocessors, networked workstations, open systems, and cloud computing.
- Stated that every layer of abstraction in programming historically brings more people ‘to the party,’ with LLMs enabling programming via natural language for all—including non-programmers.
- Explored a software-first, hardware-later approach, projecting innovation in system and memory design to serve future AI workloads, with a bet on Ethernet and tiered/multi-modal memory topologies.
- Identified a growing gap in supply/demand for compute, power, and data sovereignty, especially in the context of in-country, population-scale AI (a major theme of the conference).
- Indicated plans for future public partner announcements, especially in health and biological computation verticals.
Keynote by Vivek Mahajan | CTO, Fujitsu | India AI Impact Summit
The session, led by a Fujitsu executive at the India AI Impact Summit 2026, emphasized the strategic importance of technological sovereignty for nations like India striving for leadership in artificial intelligence. Sovereignty was defined as a combination of secure data ownership, operational flexibility, and independence from third-party technology providers. Fujitsu outlined its commitment to delivering end-to-end AI infrastructure—spanning compute, network, and software—with major announcements including the imminent launch of the world's first 2nm ARM-based servers, a roadmap for world-leading quantum computing milestones, and an open, domain-specific software stack tailored to sectors with strict privacy needs such as defense and healthcare. The company's vision focuses on powering India's sovereign AI initiatives with high-performance, power-efficient hardware, open software, and secure, customizable AI platforms, including advances in agentic AI, robotics, and partnerships with major industry players. This comprehensive approach promises to give Indian firms genuine alternatives to U.S.-centric platforms, supporting the nation’s ambition to control its AI destiny across public and private sectors.
- Emphasis on technological and data sovereignty as foundational for India's AI leadership.
- Fujitsu is launching the world's first 2-nanometer ARM-based servers, coming to market in two months, with a 1.4nm version to follow.
- Introduction of the 'Monaka' chip designed for AI and data center workloads, featuring power efficiency and confidential computing hardware.
- Roadmap for a 20-exaFLOP AI supercomputer to be operational within two years.
- Significant advances in quantum computing: 1,000-qubit machine launching next month in Japan, 10,000-qubit machine targeted within three years, and a 250-logical-qubit goal by 2030.
- All components offered via a completely open source, vendor-neutral software stack to avoid vendor lock-in.
- Networking breakthroughs: 1.6 Tbps optical switches enabling low-latency, long-range, and power-efficient data center connectivity (scaling to 3.2 Tbps in the future).
- Introduction of Takane large language model platform and Kosuchi agentic AI model for secure, domain-specific applications.
- Explicit target sectors include defense, government, healthcare, manufacturing, and finance, with dedicated solutions to keep sensitive data off public clouds.
- Robotics-focused 'physical AI OS' in development, aiming at persistent agentic intelligence for robotics, drones, and edge devices.
- Major international collaborations, including partnerships with AMD, Supermicro, Lockheed Martin, and others, to deliver total AI solutions.
Keynote by Naveen Tewari | Founder & CEO, inMobi | India AI Impact Summit
The keynote address highlighted the sweeping transformation AI is poised to bring to global commerce, touching on major themes including the democratization of intelligence, extended human lifespans, the narrowing of skill gaps, and an era of agentic commerce where deeply personalized and contextually aware AI agents revolutionize shopping, supply chains, and manufacturing. The speaker, representing InMobi and Glance, detailed their ambitious vision for 'agentic commerce,' a paradigm shift that replaces generalized experiences with hyper-personalized, AI-powered user journeys, aiming to train commerce models on a billion individual consumers. Architecturally, this involves multiple integrated AI models—knowledge graphs, generative experience engines, and individual user models—to deliver intelligent, transparent commercial decision-making at scale. The keynote positioned India as a leader in this new era, with Glance's innovations not just digitizing but fundamentally remaking commerce itself, promising massive market expansion (targeting a $3 trillion impact in India by 2047), a rebalancing of the global supply chain favoring smaller and local producers, and a renewed commitment to authenticity and transparency to combat prior distortions of the digital economy.
- AI will help extend human lifespan and eradicate diseases, potentially enabling people to live up to 120 years.
- AI will democratize skills—such as coding—narrowing inequality and elevating the technical capabilities of the broader population within five years.
- A new era of 'agentic commerce' is emerging, powered by AI agents that deliver deeply personalized shopping experiences and optimize every stage of commerce.
- Glance, InMobi's agentic commerce platform, has been globally launched, shifting from personalized feeds to individual-centric, real-time personal feeds.
- InMobi plans to train AI commerce models at individual scale, aiming to serve a billion people over the next several years.
- The technology integrates multiple AI models: a commerce intelligence graph (knowledge graph), generative AI experience layer for visual/personalized outputs, and individualized user models.
- Agentic commerce will create transparent, accountable, and trust-based shopping experiences by making recommendation engines interpretable and understandable for consumers.
- Proliferation of agentic commerce will generate significant efficiency and savings, feeding a powerful economic flywheel.
- Traditional marketplaces may decline, while individual and local brands rise, enabled by AI agents' ability to directly source from producers.
- Supply chains and manufacturing will become 'agentic', driven by consumer-level precision, improving productivity and creating a strong link between consumer data and production.
- Commerce accounts for 25% of both global and Indian GDP—agentic commerce could drive a $3 trillion economic impact in India by 2047.
- The speaker emphasized the need for authenticity, transparency, and accountability in the AI-powered digital economy to correct the distortions created by the previous wave of social media.
- India, historically late to prior technology waves, now has an opportunity to lead globally in AI-driven commerce platforms built from within the country.
Keynote: 2030 – The Rise of an AI Storytelling Civilization | India AI Impact Summit
The session presented a visionary outlook on the transformation of media, storytelling, and content creation catalyzed by AI advancements, particularly within India's unique context. The speaker traced the evolution from passive content consumption to an era of participatory, AI-generated narrative and immersive experiences. Key drivers outlined were collapsing creation costs, democratization of creative tools, multilingual platforms, and the emergence of autonomous creative engines that enable branching storylines, real-time personalization, and vastly accelerated production cycles. Emphasizing India's demographic energy, linguistic complexity, and cultural depth, the speaker argued that these strengths position India to lead a global 'storytelling civilization,' where billions of AI-assisted creators will emerge, and content ecosystems move from finite supply to an infinite stream. As business models evolve beyond advertising and subscriptions toward integrated commerce and community participation, India's entrepreneurial spirit and tradition of storytelling will uniquely enable it to shape the next digital cultural revolution by 2030.
- Streaming and passive consumption dominated media for the past 15-20 years, but true format innovation was limited.
- Short video and AI-driven content creation have driven the shift to a participatory 'creation era,' reducing production cycles from years to hours.
- AI models are enabling 'every creator to be a studio', with real-time translation and branching narratives bolstering inclusivity and engagement.
- Ecosystem shift: From one-to-many to many-to-many ('million-to-million') storytelling and feedback-driven content adaptation.
- By 2030, cameras will cede primacy to 'storytelling intelligence,' radically changing content production.
- Format innovation through micro-dramas—fast, direct, and personalized narratives not confined to traditional media lengths.
- Immersive, mixed reality and participatory storytelling experiences are emerging alongside eventized screenings.
- India's advantages: demographic scale, linguistic complexity (AI trained on nuanced language data), cultural depth, and a thriving startup ecosystem.
- Forecast: By 2030, India could have 10 million AI-assisted creators, regional studios, immersive cultural platforms, and mainstream AI-driven events.
- Industries face a transition from finite to infinite content supply—a challenge requiring new business models oriented more toward commerce than advertising.
- Conclusion: The future will not be defined by tools, but by the stories civilizations choose to tell; India can lead by becoming an AI storytelling civilization.
Panel Discussion: AI and the Creative Economy | India AI Impact Summit
The AI and Creative Economy panel at the India AI Impact Summit 2026 brought together thought leaders from the business, policy, and open knowledge sectors to discuss the ongoing transformation of the creative industries by artificial intelligence. The panel highlighted AI's increasing role in content creation across broadcasting, gaming, music, and storytelling, with up to 48% of production music in television now AI-generated or assisted. Panelists addressed the nuanced impacts of AI on cultural diversity, copyright, and the global intellectual property system. Two core themes emerged: the urgent need for India's rich public domain to be digitized and represented in AI training datasets, leveraging its oral traditions and epic heritage, and the complex ethical and practical dilemmas around the use and attribution of creative works in AI models. The consensus was that while current frameworks struggle to keep pace, technological and governance solutions are needed for creators and industries to coexist, and that India's unique cultural assets present a historic opportunity amid Western legal gridlock. The session called for collaborative approaches grounded in values, transparency, and the acknowledgment of shared creative foundations.
- 42–48% of production music in the broadcast industry is now either AI-generated or AI-assisted.
- AI is altering storytelling, content creation, and gaming, influencing every level of the entertainment ecosystem.
- Panelists included: Nicholas Granitino (Tara Gaming - business), Kenny Chiro Natsum (WIPO - policy), and Anna Tumadir (Creative Commons - open knowledge).
- Debate exists on whether AI strengthens or weakens cultural diversity—current risks lean towards weakening without strong governance and open models.
- There is significant opportunity for India due to its vast public domain heritage (e.g., Indian epics like the Mahabharata, Ramayana, and Gita), which remains largely underrepresented in AI training datasets.
- India's creative traditions and 20% share of the world population offer a strategic advantage if its heritage is digitized and included in global AI models.
- Global intellectual property (IP) frameworks are struggling to address AI-generated content, with immense diversity of views among stakeholders and a lack of consensus on regulation.
- WIPO suggests pragmatic, technological solutions over attempting new international treaties, viewing legal consensus among 194 member states as a 'long journey.'
- Ethical inconsistencies persist, with creators both utilizing and objecting to AI trained on global creative corpora; calls for middle ground and improved credit/attribution mechanisms.
- Recognition of foundational data and contributors (e.g., protein databases in AI science breakthroughs) is often overlooked in favor of platform and model creators.
Keynote Address: Revanth Reddy | Chief Minister, Telangana | India AI Impact Summit
The keynote session, delivered by the Chief Minister of Telangana at the India AI Impact Summit 2026, emphasized India's critical opportunity to lead in the global artificial intelligence (AI) revolution. The speech traced humanity's journey through prior technological breakthroughs, spotlighting AI’s transformative power and warning against India repeating past mistakes of missing major industrial and technological revolutions. The Chief Minister outlined a robust, multi-pronged strategy including the establishment of a national and state-level AI governance infrastructure (such as an AI ministry and council), creation of an AI 'war room' for rapid response, founding of a world-class AI university, manufacturing of GPU chips and securing mineral supply chains, proactive policymaking to track and mitigate AI-induced job losses, and massive investment in reskilling and AI-focused startups. Telangana was positioned as a pioneering state, proposing concrete initiatives like an AI startup village and inviting global partnerships. The speech concluded with a strong advocacy for social justice applications of AI and the need for continuous AI knowledge exchange via more frequent summits hosted across Indian cities.
- India must not repeat the mistakes of missing previous industrial and technological revolutions and should aim to lead in AI across all layers: chips, energy, data, platforms, applications, and services.
- Proposal to create a 'war room' for AI at the national level, with Hyderabad as a hosting city, to monitor and rapidly respond to AI developments.
- Call for establishment of a global standard AI university in India with a focus on original research.
- India should begin manufacturing GPU chips and securing supply of rare minerals to become an integral part of the AI hardware supply chain.
- Immediate need for systems to estimate and address AI-induced job losses and launch large-scale reskilling programs.
- Recommendation to institute a national AI fund specifically for startups, with Telangana offering to create an AI startup village.
- Proposal for an increase in the frequency of AI summits to every six months, rotating across different Indian cities.
- Call to form a national AI council akin to the GST council or NITI Aayog, and to establish dedicated AI ministries at both central and state levels.
- Emphasis on leveraging AI for social justice, inclusion, and poverty alleviation.
- Open invitation to national and international institutions to partner and establish AI initiatives in Telangana.
Keynote by Uday Shankar | Vice Chairman, JioStar India | India AI Impact Summit
The speaker, a veteran media professional, addressed the India AI Impact Summit 2026, applauding the Prime Minister's vision to integrate AI into India's growth agenda and highlighting the transformative impact of technology on India's media and entertainment industry. He recounted the sector's rapid evolution, citing its rise to a $30+ billion valuation and its position as the world's fifth-largest media market. However, the speaker acknowledged India's limitations in global content influence due to capital constraints, talent deployment, and a domestically-focused mindset. AI, he argued, provides an unprecedented opportunity for India to leap from being predominantly a domestic player to becoming a global creative powerhouse, fundamentally rewiring the pillars of content creation, consumer engagement, and business monetization. He emphasized how AI-powered production at JioStar led to increased efficiency and scale, and proposed that true globalization requires India to disrupt itself, develop AI-native creative talent through upskilling, and build regulatory frameworks tailored to India's unique context. The speaker concluded with a call for ambition and unity among all stakeholders, asserting that AI represents a generational chance for India to lead the global media industry.
- India's media and entertainment sector has evolved into a $30+ billion industry with over 900 channels and 800 million video viewers.
- JioStar has invested over $10 billion in content in the past three years, with ambitions to continue.
- Despite growth, India's global content market share remains below 2%, versus a global media market nearing $3 trillion (projected $3.5 trillion by 2029).
- AI-powered production (e.g., Mahabharat EDMU) at JioStar cut lead times by 3-5x and delivered global-scale content more efficiently.
- Due to capital constraints, Indian studios operate with only 3-5% of the average Hollywood film/TV production budget.
- AI is positioned as the catalyst to surpass infrastructure, capital, and talent limitations—transforming content, consumer engagement, and commerce.
- Dynamic pricing, advanced segmentation, and interactive storytelling are named as AI-driven opportunities to unlock new value.
- AI-native creative talent (merging technical and artistic skills) is deemed critical for India's global ambitions; calls for mass upskilling and fusion of creative/engineering domains.
- Regulatory frameworks should be tailored to India's needs—resisting wholesale adoption of western models and ensuring they accelerate rather than hinder growth.
- India's lack of legacy media and IP 'baggage' is highlighted as an advantage for rapid AI adoption versus the West.
- A unified stakeholder approach—disrupting proactively, nurturing talent, and supportive policies—was advocated as essential for India to claim a leading role.
- AI is described as a generational equalizer, providing India the opportunity to lead rather than follow in the next era of global media.
Skilling and Education in AI
The session at the India AI Impact Summit 2026 focused on the transformative potential of AI across key sectors such as agriculture, small businesses, education, and healthcare, emphasizing both opportunities and critical challenges. Highlighting agriculture as a sector with immense productivity gains possible through AI-driven pest detection and localized solutions, the speakers also addressed AI's potential to empower small businesses and upskill India's vast workforce. Leaders from NSDC and NCBT detailed comprehensive efforts to integrate AI into career guidance, vocational training, and certification, outlining multi-tiered skilling frameworks to democratize AI access from schools to engineering colleges, and grassroots professions. A central theme was the looming 'trust gap,' underscoring the necessity of building robust trust infrastructure—on par with digital public infrastructure—to foster AI adoption, avoid reinforcing inequalities, and maximize India's demographic and trust dividends. Concerns around the risk of AI amplifying societal and geographic inequalities, governance of data, and the need for rapid, adaptable standards and certifications were highlighted as essential for inclusive AI transformation.
- AI in agriculture could dramatically boost productivity; smallholder farmers lose 40-50% output to pests, and AI solutions in local languages could reduce this significantly, directly increasing incomes.
- Small businesses can operate more efficiently and independently, leveraging AI for market research, analysis, and operational support.
- NSDC, with 36 sector skill councils and 400 training partners, is focusing on: AI-enabled career guidance, AI skilling programs at multiple levels, AI-driven transformation of training and assessment, and AI-powered outcome monitoring via platforms like SID (Skill India Digital).
- 10,000 engineering students are enrolled in future skills AI centers, collaborating with major industry players (e.g., Microsoft, Google, Amazon) for credit-based AI architect programs.
- NCBT has launched a multi-level AI skilling framework—'skilling for all,' 'skilling for many,' and 'skilling for few'—making AI literacy accessible to diverse professions (e.g., tailors, plumbers, beauticians), with over 200,000 registered for basic courses.
- AI's risk in perpetuating or amplifying existing inequalities (social, geographic, access, and environmental) was flagged, since AI systems reflect historical biases and are concentrated in a few global regions.
- Trust infrastructure, encompassing transparency in AI systems, data handling, and outcomes, is critical for adoption—India’s digital trust levels (~70%) vastly outpace those in the US (~25-30%), creating a significant advantage.
- Certification and accreditation face challenges due to rapidly evolving skills needs; frameworks and micro-credentials are being piloted to keep standards current and widely recognized.
Panel Discussion: Inclusion, Innovation & the Future of AI | India AI Impact Summit
This session at the India AI Impact Summit 2026 centered on reconciling excellence and inclusion in AI policy, governance, and infrastructure. Panelists emphasized that AI's societal impact transcends technical and economic domains—touching aspects from regulatory adequacy and risk management to democratized access, skills development, and collaborative governance. There was consensus that the prevailing narrative about AI being insufficiently regulated needs reconsideration; existing legal frameworks, if systematically applied and updated, can address many risks. At the same time, participants underscored the distinct role of governments in investing in foundational research, incentivizing broad-based participation, ensuring equitable compute and infrastructure access, and correcting market distortions such as monopolization. The discussion further highlighted that responsible AI requires actionable governance beyond policy statements—embedding inclusivity, transparency, employee engagement, and measurable mechanisms for oversight. Moreover, compute infrastructure was identified as becoming mission-critical, akin to utilities like ports or railways, warranting public-private partnership and regulatory focus for national competitiveness and equity.
- AI policy must move beyond binary debates around immediate regulation, recognizing that much of AI is already governed by existing legal traditions (e.g., common law, liability doctrine).
- Panelists championed the presumption that current law is generally sufficient for AI oversight unless proven otherwise by clear threat models.
- Government’s role in innovation is pivotal—not only regulating but also investing in infrastructure, skills, and foundational research (as evidenced by DARPA and public sector innovation in the US).
- Addressing market distortions (monopolies, oligopolies in AI) is essential; government policies should target open access, skills development, and digital literacy to prevent AI benefits from accruing only to a 'lucky few.'
- AI governance frameworks in enterprises are evolving beyond compliance and risk management to become strategic capabilities, emphasizing the embedding of privacy, security, and transparency at both design and deployment phases.
- Effective AI governance includes employee participation and feedback, fostering a culture of trust and ability to respond to discrimination, bias, or system errors.
- Panelists agreed that compute infrastructure should be treated as critical public infrastructure, necessitating government oversight and partnership with industry to ensure equitable access and national security.
- Moving forward, AI governance must engineer inclusivity and fairness by design, not as an afterthought, with attention to the risks of disinformation, deepfakes, and automated inequality.
Panel Discussion: AI & Cybersecurity | India AI Impact Summit
The session at the India AI Impact Summit 2026 focused on global efforts to democratize access to artificial intelligence (AI) education and capacity building, with special emphasis on inclusivity across all ages, genders, and geographies. Key policy moves include the integration of AI training into India's national curriculum from grade 3 onwards and widespread retraining through higher education. Internationally, the launch of a global network for AI capacity-building centers—initiated by Saudi Arabia and Kenya with early support from India—aims to empower countries of the Global South and address the digital divide through skills, infrastructure, and shared expertise. Concrete successes, such as Saudi Arabia's Women Elevate program and India's robust ITEC training initiatives, showcase the tangible impact of collaborative, scalable AI education, while experts stress the necessity for continuous adaptation of teaching methods and inclusive, evidence-driven scientific engagement.
- India will introduce AI education at all levels, including making it compulsory from grade 3 in schools and integrating it into all higher education courses, aiming for universal AI literacy.
- National retraining initiatives are in progress to upskill existing workers through India's higher education institutions.
- A new global network for centers of exchange and cooperation on AI capacity building has been launched, spearheaded by Saudi Arabia and Kenya, with UN and UNESCO collaboration.
- India's ITEC (Indian Technical and Economic Cooperation) program has provided AI training to thousands of officials from 160 countries, and annually offers 10,000 fully-funded training slots across 400 courses in 100 Indian institutes.
- Saudi Arabia's Women Elevate program trained 6,000 women in over 86 countries in AI fundamentals in the past year, aiming for 25,000 women globally in three years, with high completion and certification rates.
- India, Brazil, China, Ethiopia, Guinea, Kazakhstan, Kenya, Rwanda, Saudi Arabia, Senegal, Slovakia, South Africa, Trinidad & Tobago, and Vietnam have all nominated institutions to the new AI capacity building network.
- The global network’s focus is to ensure equitable AI advancement and mitigate an emerging AI capacity divide between developed and developing countries.
- Expert panels stress that capacity for both developing and using AI must be built everywhere and that new approaches to teaching, especially in self-learning-driven environments, are essential.
- Collaborative initiatives specifically target the Global South, diverse age groups, and women, aiming to leave no one behind in the AI revolution.
- Evidence-driven, science-based frameworks will guide AI capacity development to ensure global representation and impact.
How the EU’s GPAI Code Shapes Safe and Trustworthy AI Governance | India AI Impact Summit 2026
This session at the India AI Impact Summit 2026 focused on the establishment and significance of a new AI 'code of practice', with insights from lawmakers, scholars, and industry experts. The discussion emphasized a collaborative approach to AI governance, involving stakeholders from civil society, academia, industry, and public institutions. The code is designed to provide clear yet flexible standards for reducing existential and systemic risks associated with AI, particularly those affecting democracy and fundamental rights, while promoting innovation. Notably, effective AI governance was stressed as demanding international cooperation, clear implementation frameworks, and a focus on specific use cases where trust and safety controls can differ substantially. Panelists underscored the urgency for leaders worldwide to create conditions enabling AI companies to prioritize safety even amid geopolitical competition, calling for ongoing commitment to trust-building and coordinated policy action, especially regarding sensitive domains like military applications.
- Introduction of an AI 'code of practice' developed through a collaborative process with civil society, industry (including SMEs and large companies), and academia.
- The code aims to prevent both existential and systemic risks (e.g., threats to democracy, misinformation, cybercrime) while ensuring innovation and protection of human rights.
- Emphasis on providing sufficient resources for the European AI office to implement and enforce the code effectively.
- Recognition that while many companies are already complying with risk mitigation measures, governments must match the capacity of powerful private actors to enforce regulations and build public trust.
- Urgent recommendation for global leaders to create preconditions enabling companies to prioritize safety and coordinate internationally, including with peers in China and elsewhere.
- Calls for policies to focus on domain- and context-specific trust and safety mechanisms, especially in high-impact sectors like medicine vs. customer service.
- Highlight of the necessity for international cooperation, especially regarding high-risk domains such as military AI and loss of control scenarios.
- Summit's main takeaway: innovation and trust are not mutually exclusive, and the code of practice could serve as a potential global standard.
- Panelists agreed on the need for public institutions to take leadership in AI governance rather than leaving the initiative solely to private sector actors.
Keynote by Marcus Wallenberg | Chairman, SEB & Saab | India AI Impact Summit
The transcript features a keynote address drawing parallels between Sweden's AI research-driven approach and India's applied software strength, suggesting a strong synergy between the two nations for AI innovation. The speaker highlights the WASP program's significant impact on AI talent development in Sweden, with one PhD graduate per week, and discusses how India's robust IT services sector positions it well for rapid AI diffusion. The address underscores the potential for Indo-Swedish collaboration across basic research, IT services, and applied AI, especially to remain competitive against global challenges such as inexpensive Chinese exports. The speaker identifies key industrial and societal opportunities for AI in sectors like life sciences, defense, and telecommunications, exemplified by real-world applications such as AstraZeneca's pharmaceuticals and Saab's AI-driven fighter aircraft. Lastly, the significance of AI in future 5G/6G networks is emphasized, framing AI as essential for both economic competitiveness and transformative breakthroughs across industries.
- India's AI summit is characterized as a strategic national initiative akin to 'Make in India,' positioning AI as a driver for long-term goals.
- Sweden's WASP program, a decade-old initiative, now graduates one AI PhD per week, reflecting its emphasis on foundational AI research.
- Sweden and India are positioned as complementary: Sweden excels in R&D and academia while India leads in applied software and IT services.
- There is a robust potential for deeper Indo-Swedish cooperation in AI research and application, leveraging mutual strengths.
- Industrial competitiveness with China—especially against cheaper Chinese exports—will depend on rapid AI diffusion and innovation.
- AI is enabling transformative changes in sectors such as life sciences (accelerating drug discovery, personalized medicine), defense (Saab's AI-controlled aircraft), and telecommunications (Ericsson's 5G/6G networks).
- By 2025, AI agents have already been used in mission-critical control for fighter aircraft (Saab's Gripen), demonstrating real-world industrial AI.
- The address foreshadows AI's broad societal impact, from businesses to healthcare and public services, thanks to advancing AI and data-driven telecommunications.
NextGen AI: Skills, Safety, and Social Value - technical mastery aligned with ethical standards
The session at the India AI Impact Summit 2026 focused on the current talent gap in the Indian AI ecosystem and highlighted national and institutional efforts to address it. Moderated by Sabot, Director of STPI Headquarters, the conversation brought together leaders from academia, government, startups, and industry to discuss the attributes required for 'NextGen AI' talent. The discussion emphasized the importance of critical thinking, risk-taking, ethical judgment, foundational skills in technology, and domain specialization alongside practical problem-solving and adaptability to evolving regulations. Significant announcements included updates on India's 10 lakh AI skilling drive, STPI's Skill Up initiative, several regional AI training hubs, and expansion of a training partner network. Panelists underscored the importance of moving beyond surface-level AI usage towards deeper understanding, creative application, and inclusive growth (including vernacular language support and bridging AI divides). The value of strong foundations, lifelong learning, and real-world industry alignment was a key theme, and the need for AI education to balance tech mastery with ethics and critical evaluation was reiterated.
- STPI announced the imminent launch of multiple regional hubs for AI training across India as part of its Skill Up initiative.
- The current training partner ecosystem under STPI has expanded to 18 partners with more to be added.
- India's ongoing '10 lakh AI skilling drive' and the updated 'Skill India Digital' program were highlighted as pillars of national AI talent development.
- Panelists agreed that the most critical requirements for next-gen AI talent are critical thinking, foundational understanding, risk-taking, ethical judgment, real-world application, and familiarity with sector-specific regulations.
- There is a pronounced shift from learning libraries/tools to mastering foundations, creativity, and domain-centric problem solving in AI education and evaluation.
- NextGen AI was conceptualized as the 'infrastructure for intelligence' capable of reducing digital and AI divides, especially through vernacular language and inclusion.
- Academic leaders called for a T-shaped talent profile—deep domain expertise paired with broad AI skills and awareness of risk and containment practices.
- Government perspectives reinforced the need for adaptive regulation awareness and the importance of interoperable standards in technological innovation.
ElevenLabs Voice AI Session & NCRB/NPM Fireside Chat
The session focused on the critical issue of language accessibility and inclusivity in India's digital space, highlighting the country's substantial linguistic diversity and the digital exclusion caused by English-dominated online content. The Bhashini translation plug-in, built atop 350+ language models, was presented as a game-changing, lightweight solution that empowers websites to instantly offer content in all 22 Indian scheduled languages, requiring no back-end expertise, only a simple copy-paste. The plug-in is already integrated with over 400 government and institutional websites and has served over 24 million translation inferences, underscoring its adoption. It supports direct source-to-target translation without English as an intermediary, a customizable interface for regional preferences, skip-translation options for certain web elements, DBM accessibility compliance, and smooth integration with multilingual and multi-page sites. Real-world use cases, such as improved access to government forms for non-English users, illustrate its tangible impact. The Bhashini initiative positions language infrastructure as foundational to digital inclusion and aims to dismantle language barriers for the over 800 million Indians who are not fluent in English.
- India faces a major digital divide with 800+ million people not fluent in English, while 95% of online content remains English-centric.
- The Bhashini translation plug-in, powered by 350+ language models, is operational on 400+ websites, enabling instant translation into all 22 scheduled Indian languages.
- The plug-in is extremely lightweight, requiring only a simple copy-paste of code for any website integration—no back-end overhaul needed.
- It is fully DBM (Digital Brand Identity Management) compliant, ensuring accessibility for persons with disabilities.
- Over 24 million translation inferences and 1.5 million glossaries have been generated through the platform.
- Plug-in supports direct translation between any two Indian languages (not just via English).
- Custom features: ability to exclude sections like calendars or emails from translation, reorder language lists to prioritize regional languages, and limit available languages if required.
- For multi-lingual websites, translation can be skipped for content already present in the source language.
- Portals with interactive forms can avoid data loss by disabling page reloads when language is switched.
- Real-world use case: Farmers accessing government forms in their native language for schemes like PM Kisan Samman Nidhi without language barriers.
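The plug-in's most technically notable claim is direct source-to-target translation without English as a pivot, since pivoting through a third language compounds translation errors. A minimal Python sketch of the routing idea; the model registry and function are entirely hypothetical illustrations, not the Bhashini API:

```python
# Hypothetical sketch of pivot-free translation routing.
# The DIRECT_MODELS registry below is illustrative, not Bhashini's actual model list.
DIRECT_MODELS = {("ta", "hi"), ("hi", "ta"), ("bn", "mr")}

def route(src: str, tgt: str) -> list[tuple[str, str]]:
    """Return the chain of model hops needed to translate src -> tgt."""
    if (src, tgt) in DIRECT_MODELS:
        # One hop: meaning is not filtered through an English intermediary.
        return [(src, tgt)]
    # Fallback: pivot through English, doubling the chances of error.
    return [(src, "en"), ("en", tgt)]

print(route("ta", "hi"))  # → [('ta', 'hi')]
print(route("ta", "bn"))  # → [('ta', 'en'), ('en', 'bn')]
```

With 22 scheduled languages there are 462 ordered pairs, which is why a large bank of models (350+ per the session) is needed to keep most routes direct.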
Partnering on American AI Exports | Powering the Future | India AI Impact Summit 2026
The session at the India AI Impact Summit 2026 underscored the growing strategic and technological partnership between India and the United States, highlighted by the signing of the 'Pax Silica' declaration. Key leaders including Ambassador Sergio Gor, executives from Micron, and Indian industry and government representatives discussed the limitless potential of US-India collaboration, particularly in AI, semiconductors, and resilient supply chains. Micron emphasized its $2.75 billion investment in India's Sanand facility and the resulting advancement in memory and chip technologies. Broader supply chain security, trust, and democratization of technology remain central, with both countries aligning values to avoid over-dependence and ensure technology benefits all. The session also noted over $25 billion in ongoing semiconductor investments and India's rapid rise in manufacturing—including mobile phones and AI-specific chips. Panelists lauded the timely formation of Pax Silica and highlighted India’s elevation of AI to a strategic national capability, emphasizing mutual commitment to innovation, trusted technology, and responsible AI development. The session set the tone for further high-level US-India AI cooperation and reinforced the summit as a platform for shared global technology leadership.
- Signing of the 'Pax Silica' declaration, a landmark US-India agreement on technology, semiconductors, and supply chain resilience.
- Announcement of Micron's $2.75 billion investment in its Sanand, Gujarat, assembly and test facility, projected to assemble and test hundreds of millions of chips.
- India's total semiconductor investment exceeds $25 billion across 10 factories, including Micron and Tata Electronics.
- Rollout of India's first AI-enabled semiconductor fab and indigenous packaging technology in Assam, supporting both US and Indian companies.
- India now produces 1.5 million engineers annually; 20% of global semiconductor design is driven by Indian talent.
- India produced $70 billion worth of mobile phones in the last year, with $30 billion exported.
- Key partnerships between India and leading US semiconductor firms, including Analog Devices, Qualcomm, Synopsys, and Intel.
- Session emphasized the need for trusted, resilient, and democratic technology value chains, learning from the pandemic and geopolitical upheavals.
- The US administration signaled sustained commitment and high-level engagement with India on AI over the next three years.
- Panelists framed AI as a world-changing revolution, likening its impact to historical technological shifts.
Designing the AI Factory: Scaling Compute to Sovereign AI
This session at the India AI Impact Summit 2026 provided an in-depth discussion on the evolution, challenges, and opportunities presented by AI in India, emphasizing the transition from labor-based to value-based business models in IT services. The speakers highlighted the pervasiveness of AI in everyday products, the unique importance of supporting AI in Indian languages to unlock national potential, and the mission-critical need for a sovereign AI infrastructure. There was widespread praise for the government's proactive approach, particularly in democratizing access to GPUs and data, supporting startups, and enabling the development of BharatGPT—a large language model grounded in Indian data and languages. The session also addressed concerns about job losses, framing AI as a tool for redefining and expanding work opportunities. Forward-looking topics included India's rising status as a global AI solutions hub, the potential formation of an AI Ministry, grassroots AI adoption, and the ethical and practical imperatives of data privacy and sovereignty. The session closed with an introduction to an AI-powered child-friendly assistant exemplifying safe, responsible AI deployment.
- AI is now omnipresent in daily products and applications, often used unknowingly by consumers.
- AI will not eliminate jobs but redefine and expand opportunities, requiring shifts from effort-based (per hour/day) to value-based business models, especially in IT services.
- Most global AI models are English-focused; India's AI growth depends on models working in diverse Indian languages — BharatGPT now operates in 14+ languages, having started with Hindi and English.
- Only 10% of Indians are fluent in English, making AI that works in Indian languages essential for true national impact and inclusion.
- Prime Minister’s address at the AI Summit was distributed in at least 15 Indian languages, emphasizing inclusivity.
- There is strong advocacy for the development of 'sovereign AI'—national AI models and infrastructure, independent of foreign dependencies.
- India’s large and aspirational population is a unique asset for producing vast, diverse data to train advanced AI models.
- India is anticipated to become a global hub for real-world AI solutions within the next two years.
- The summit was attended by international leaders and AI ministers from the UK, Canada, and France, indicating India's emergence as a global AI focal point.
- Call for the formation of an AI Ministry within India to further consolidate government AI strategy and initiatives.
- Government praised for providing free GPUs, funding for model development, and opening up opportunities for startups and innovators.
- Focus on building core infrastructure: compute power, energy, foundational models, and mass skilling for AI.
- Policy recommendation to shift funding emphasis from tech development to incentivizing adoption and real-world application deployment.
- BharatGPT highlighted as a pan-Indian project—built from the nation’s data and voice contributions, free to use via platforms like Hugging Face.
- Introduction of a child-focused AI assistant—prioritizing privacy, safety, and responsible AI, with full parental control.
Need and Impact of Full Stack Sovereign AI by CoRover BharatGPT
This session at the India AI Impact Summit 2026 provided a deeply insightful look into India’s current trajectory and future ambitions in artificial intelligence. The dialogue centered on India’s AI innovation story—highlighting grassroots adoption, language inclusivity, data sovereignty, and the transformative potential of AI for both business models and daily life. The speakers emphasized the India-centric large language model, ‘BharatGPT’, now used by 1.3 billion people in over 14 Indian languages, as an inclusive success narrative. Policy discussions focused on India’s unique approach: providing open access to compute resources, state-led foundational efforts, and fostering AI applications that address real-world problems. There was robust debate on AI’s impact on jobs—framing it not as elimination but as a refinement of opportunities and a call for shifting business models from time-based to value-based pricing. Sovereign AI, data openness, and government leadership were recognized as crucial, with praise for India’s proactive and forward-thinking AI policies. The session concluded with a rallying call to build for the “bottom of the pyramid”, leverage India’s massive data assets, and to embrace the current moment as India’s time in the global AI race.
- BharatGPT has over 1.3 billion users and supports 14+ Indian languages, enabling AI access for diverse populations.
- More than 500 developers and enterprises are utilizing the BharatGPT platform.
- India’s AI models focus on Indian languages—a critical differentiator, as 90% of Indians aren’t fluent in English.
- AI is already present in everyday products and applications, with integration expected to deepen over the next 5-6 years.
- Debunked fears of AI eliminating jobs, stating that automation enables faster problem-solving and new opportunity creation, and urging a shift from time-based to value-based business models (e.g., pricing by value delivered rather than per hour).
- The government’s unique approach includes providing free GPUs and funding for data/model development—ahead of global peers.
- Sovereign AI (“Atmanirbhar AI”) is key, ensuring India’s independence from foreign tech and the ability to export AI models to other countries.
- India has a data advantage due to its population and content generation, positioning it to build AI models effectively.
- Calls for focus on AI infrastructure (compute, energy), foundational models, applications, and AI skilling.
- The Indian government is praised for proactive, anticipatory policy—delivering support before formal industry requests.
- Recommendation to shift government funding from startups to organizations/users deploying and adopting AI applications.
- AI summit’s international success reflected in attendance from UK, Canada, France AI ministers; suggestion to establish a dedicated AI Ministry in India.
- Vision for Bengaluru as India’s AI capital, and for India to become a global AI hub in the coming years.
- Encouragement to ignore critics and focus on execution, asserting that India’s AI moment is now.
Keynote Addresses at India AI Impact Summit 2026
The session at the India AI Impact Summit 2026 marked a historic deepening of US-India technological collaboration with the formal signing of the 'Pax Silica' declaration. Key speakers, including Google CEO Sundar Pichai, Micron CEO Sanjay Mehrotra, US Under Secretary of State Jacob Helberg, and US Ambassador Sergio Gor, emphasized the shared commitment to building a resilient and inclusive AI ecosystem rooted in secure supply chains, infrastructure investment, and talent development. Announcements included the operationalization of a $15 billion Google AI infrastructure hub in Visakhapatnam, Google's goal to equip 10 million Indian leaders with AI skills, the expansion of digital connectivity via new sub-sea routes, and major investments from Micron in semiconductor manufacturing in Gujarat. The Pax Silica declaration is positioned as a foundational document for strengthening economic security and self-determination, ensuring that advanced technologies benefit democratic societies. Recent policy advancements, such as India's accession to Pax Silica and the conclusion of a new US-India trade agreement, lay the groundwork for a robust, innovation-driven partnership poised to shape global AI and technology trajectories.
- Signing of the Pax Silica declaration, formalizing a US-India alliance for secure, resilient technology ecosystems.
- Google announced a $15 billion investment in Indian AI infrastructure, centered on a new AI hub in Visakhapatnam with gigawatt-scale computing.
- Launch of the 'India America Connect' initiative to expand digital trade and connectivity between the US, India, and the Southern Hemisphere.
- Google and partners committed to equipping 10 million Indian leaders with AI skills, including a Google AI certificate in partnership with Wadhwani AI.
- 22 Gemma AI models contributed by Google to Indian AI companies; Gemini app now available in 10 Indian languages.
- Micron Technology announced $2.75 billion investment in a semiconductor assembly and testing facility in Sanand, Gujarat, with a 500,000 sq. ft clean room.
- Micron's India R&D teams have contributed nearly 2,000 patents since 2019, supporting global AI and memory technology.
- Emphasis on securing AI supply chains (minerals, chips, infrastructure) to prevent economic coercion and build national resilience.
- Recent conclusion of an interim US-India trade agreement to enhance economic ties and technological cooperation.
- US and India pledge a pro-innovation approach to AI, fostering open markets, democratic resilience, and self-determination.
Panel Discussion: Data Sovereignty | India AI Impact Summit
The session focused on the complex and evolving concept of sovereign AI infrastructure, particularly in the context of India and emerging markets. Panelists discussed the critical need for countries to maintain control over their digital infrastructure, especially compute resources and data, without isolating themselves from global collaboration. There was consensus that sovereignty does not mean total self-sufficiency, but rather strategic control, clear ownership policies, and trusted and transparent partnerships in supply chains. Real-world examples highlighted India's approach to migrating key platforms (like the Bhashini language platform) onto domestically controlled data centers. Panelists from Africa emphasized the importance of designing AI systems that cater to local needs and lived experiences, especially in regions with limited compute but rich use cases and data. The session concluded with the view that sovereignty, especially in AI, hinges on trust, operational transparency, and the assurance that no single external entity controls a nation's digital destiny.
- Sovereignty in AI is about strategic control of infrastructure, data, and rule-making—not isolationism.
- India is prioritizing local compute resources and data storage, aiming for control over key digital infrastructure.
- It was noted that 95% of India's AI use cases can be serviced by models between 20 to 100 billion parameters, reducing need for frontier, trillion-parameter models.
- Example: India migrated its public AI language platform, Bhashini, from a hyperscale global cloud provider to a domestically controlled data center, enhancing sovereignty.
- African perspective: While regions may lack compute resources (e.g., Africa at 1% capacity), they possess unique data and pressing use cases, especially in sectors like health.
- Sovereign AI design must account for local languages, context, and user needs; local builders have a unique role.
- Sovereignty requires robust guardrails, operational transparency, and assured trust in both public and private infrastructure; ownership must be strategic and targeted.
- Supply chain trust is crucial—sovereignty includes not just software/data, but hardware and network components.
- True sovereignty is about building digital public infrastructure with strong government guardrails, innovation in industry, and global partnerships governed by trust and transparency.
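The panel's estimate that 95% of India's use cases fit 20–100B-parameter models has a concrete infrastructure consequence: the memory needed just to hold a model's weights scales linearly with parameter count and bytes per parameter. A back-of-envelope sketch, under the simplifying assumption that weights dominate (KV cache and activations are ignored):

```python
# Approximate bytes per parameter at common serving precisions.
BYTES_PER_PARAM = {"fp16": 2, "int8": 1, "int4": 0.5}

def weight_memory_gb(params_billion: float, precision: str) -> float:
    """GPU memory (GB) needed just to hold the weights: params * bytes/param."""
    return params_billion * 1e9 * BYTES_PER_PARAM[precision] / 1e9

for n in (20, 70, 100):
    # e.g. a 70B model at fp16 needs ~140 GB of weight memory,
    # i.e. a handful of commodity accelerators rather than a frontier cluster.
    print(n, {p: weight_memory_gb(n, p) for p in BYTES_PER_PARAM})
```

This is the arithmetic behind the sovereignty argument: models in this range are servable on domestically controlled data centers, whereas trillion-parameter frontier models are not.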
Keynote by Mathias Cormann | OECD Secretary-General | India AI Impact Summit
The session at the India AI Impact Summit highlighted India's global leadership in artificial intelligence (AI), with the OECD's Secretary General commending India's pivotal role in fostering international collaboration on AI policy. The OECD is deeply engaged in supporting policymakers globally through data-driven analysis, tracking investment and technological trends, and developing frameworks to promote responsible adoption while mitigating risks, such as job displacement and automation. Substantial growth in AI investment and adoption was reported, with AI now accounting for 61% of global venture capital investments and the number of AI incidents increasing significantly in recent years. The launch of key tools—including the new OECD AI Index, an interactive policy toolkit, and the Equitable AI Transitions Playbook—underscored the focus on benchmarking, best practices, upskilling, and building global standards for responsible, inclusive AI. International cooperation through initiatives like the Global Partnership on AI (GPAI), as well as new guidance for responsible innovation and due diligence, further illustrated efforts to align innovation with safety, ethics, and societal benefits.
- India praised for global leadership in AI policy collaboration after recent summits in the UK, Korea, and France.
- OECD estimates strong AI adoption could boost labor productivity by up to 1 percentage point per year across OECD and G20 countries over the next decade.
- Nearly $750 billion in AI infrastructure investments planned by big tech companies this year.
- 61% (or $259 billion USD) of global venture capital now goes into AI firms, up from 30% three years ago; US firms receive 75% of this AI VC funding.
- Reported AI incidents increased from 92 to 324 per month on average between 2022 and 2025.
- OECD has released a new AI Index for policymakers and will launch an interactive evidence-based toolkit containing global best practices.
- The Global Partnership on AI (GPAI) expands to 46 countries with Malta and Saudi Arabia joining.
- OECD's Hiroshima process reporting framework is being updated to support SME adoption and due diligence guidance for responsible AI is now published.
- 27% of jobs are at high risk of automation from AI, yet only 23% of adults with low literacy participate in relevant training compared to 61% among higher-literacy adults.
- OECD and ILO have developed the 'Equitable AI Transitions Playbook' to support upskilling and reskilling efforts.
- Session transitions to a panel on data sovereignty with leaders from Yotta Data Services, Kala Limited, L&T, Vioma, and the Center for Legal Policy.
Artificial General Intelligence and the Future of Responsible Governance
The session at the India AI Impact Summit 2026 focused on the accelerating advancements in artificial intelligence and the growing discourse around Artificial General Intelligence (AGI). Panelists acknowledged the rapid evolution of AI over the past three years and discussed the urgent need for clarity, governance, and preparedness for societal and ethical challenges associated with AGI. Definitions of AGI centered around the ability for machines to perform diverse human tasks with professional accuracy, adaptability, and reliable reasoning across domains. The timeline for achieving AGI varied among the experts, with estimates ranging from 3 to 10 years, influenced by factors such as technological investment, algorithmic innovation, and efficient use of compute resources. The panel also emphasized that the path to AGI involves not only technical advancements in compute and algorithm design, but also critical investments in education, privacy safeguards, and human-centered skills to manage trust, context, and ethical implications. The session concluded by highlighting the importance of a holistic approach: balancing massive investments in AI infrastructure with robust frameworks for privacy, critical thinking, and context-aware deployment suited to India's diverse social fabric.
- AI breakthroughs and accelerated progress since 2020 have intensified discussions on AGI, with significant milestones observed in 2026.
- AGI is commonly defined as an AI system capable of performing any human task at a professional level, with general adaptability, context awareness, reasoning, and reliability.
- Estimates for reaching AGI range from 3 to 10 years, depending on investment and technological trajectory.
- The definition and public perception of AGI are evolving, with increasing trust in Generative AI tools (e.g., 50% of Israelis reportedly trust Gen AI more than friends or professionals).
- Key technical hurdles to AGI include developing neuromorphic hardware, low-latency and energy-efficient architectures, embodied and multimodal learning, and context interpretation.
- "Accuracy nines": Moving from 90% to 99.999...% accuracy requires years of sustained effort for each decimal place—AGI will need consistency and reliability near human levels.
- Market disruption due to AI is compared to previous technological shifts, but it is faster and broader due to pervasiveness.
- The role of compute (processing power and infrastructure) is critical but must be considered alongside data, energy, privacy, and education.
- There is a risk of infrastructure overinvestment (possible bubble), but leaders believe excess compute capacity will find use as AI evolves.
- Ethical frameworks, privacy, critical thinking, and human-centric education are as important as technical investment to ensure safe and effective AI adoption in India.
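The "accuracy nines" point above can be made concrete: each added nine is a tenfold reduction in error rate, and per-step errors compound multiplicatively across a chain of actions, which is why near-human reliability is so hard to reach. A small illustrative sketch; the 100-step chain below is an assumed example, not a figure from the session:

```python
import math

def nines(accuracy: float) -> float:
    """Number of 'nines' in an accuracy figure: 0.99 -> ~2, 0.999 -> ~3."""
    return -math.log10(1.0 - accuracy)

def chain_success(step_accuracy: float, steps: int) -> float:
    """Probability of completing `steps` sequential steps with no error."""
    return step_accuracy ** steps

print(round(nines(0.999), 2))              # → 3.0
print(round(chain_success(0.99, 100), 3))  # → 0.366
```

Even 99% per-step accuracy leaves a long 100-step task failing almost two times out of three, which is the panel's argument that each additional "nine" takes years of sustained effort yet matters enormously for AGI-level reliability.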
Empowering India & the Global South Through AI Literacy
The panel discussion at the India AI Impact Summit 2026, featuring leaders from the Central Square Foundation (CSF), Wadhwani School of AI, Transform Schools, and Chrysalis, focused on the transformative role of artificial intelligence (AI) in Indian education, particularly in government and low-fee schools. The conversation emphasized the importance of AI literacy for students, teachers, and parents, drawing on large-scale programs like 'AI Summer,' which aims to equip millions with essential AI knowledge and skills. Panelists shared real-world examples from Odisha, highlighting how students like Shreddha and Punam are using AI to build curiosity, confidence, and agency in their learning. Teachers, previously wary or under-informed, are now finding AI to be an empowering assistant rather than a threat, aiding lesson planning, diagnostics, and inclusive pedagogy—especially where resources are limited. Multilingual AI tools are increasing accessibility and equity, while responsible engagement and awareness of AI ethics are also being prioritized across stakeholders. The discussion concluded that AI’s integration into classrooms is inevitable, and ensuring equitable, responsible AI literacy is key to leveraging its full potential for social impact.
- Central Square Foundation’s 'AI Summer' program is delivering AI literacy curriculum at scale, in partnership with the Wadhwani School of AI.
- Transform Schools has impacted over 30 million students across seven years, with the AI Summer program alone reaching 0.9 million students in government schools.
- Real-life examples from Odisha show AI literacy transforming curiosity into learning confidence and practical application for students, particularly first-generation learners.
- AI tools are enabling meaningful personalization in education—tailored lesson plans, assessments, and learning pathways—addressing India’s high student-teacher ratios.
- AI’s multilingual and voice capabilities are lowering barriers for underserved communities, engaging not just students but also parents in the learning process.
- Teachers in both government and low-fee private schools are moving from fear and resistance to confidence and agency, seeing AI as a productivity enhancer rather than a threat.
- Panelists emphasized the need for responsible AI engagement, including ethics and bias education, to ensure safe and equitable use.
- There is now consensus that AI will remain pervasive in education, making widespread AI literacy programs urgently necessary for all stakeholders, not just students.
AI for Democracy: Reimagining Governance in the Age of Intelligence
The session at the India AI Impact Summit 2026, featuring distinguished speakers including Om Birla (Speaker of the Parliament of India), Martin Chungong (Secretary General, IPU), Dr. Chinmay Pandya (All World Gayatri Pariwar), and Sofia Jaminia (Human AI Foundation, Mexico), focused on 'AI for Democracy: Reimagining Governance in the Age of Intelligence.' The discussions highlighted both the promise and peril of AI integration into democratic governance, especially regarding transparency, accountability, and inclusive participation. Speakers emphasized the urgency of establishing robust global, technological, public, and civic governance frameworks for AI to align the technology's deployment with democratic values. Major concerns included the risk of misinformation, polarization, concentration of power, and the potential erosion of democratic institutions if AI remains largely in the control of a few corporations and states without sufficient oversight or inclusive governance. The dialogue underscored the need for measurable global standards, ethical boundaries, and multi-stakeholder collaboration to ensure that AI genuinely serves democracy, rather than undermining it, particularly in the Global South. The session marked the culmination of the summit by encouraging collective responsibility and action-oriented engagement to shape AI’s impact on democratic processes.
- Dignitaries present included India's Parliamentary Speaker Om Birla, IPU Secretary General Martin Chungong, Hungarian Parliament Deputy Speaker Lajos Olah, Dr. Chinmay Pandya, and Sofia Jaminia.
- The theme was 'AI for Democracy: Reimagining Governance in the Age of Intelligence,' focusing on how AI should serve democratic principles—transparency, accountability, inclusivity—not undermine them.
- Sofia Jaminia stressed the importance of global participatory governance frameworks and binding agreements for AI, beyond just principles and guidelines.
- Dr. Pandya highlighted India's historical role as the world's largest democracy and the first to establish democratic governance (Vaishali), as well as the AI summit's historical significance.
- Concerns were heavily emphasized on AI's ability to amplify misinformation, deepen polarization, and manipulate public opinion, with references to real-world incidents like AI's interference in Romanian elections.
- Panelists called for governance at multiple levels: institutional/regulatory, technological (who codes the values), civic (raising digital literacy), and global (addressing cross-border AI impacts).
- Special warnings included AI’s 'black box' nature, limited public understanding, the erosion of accountability, and the risk that AI could gradually undermine democratic systems if not governed collectively.
- The session urged that AI be developed and deployed with collective intelligence and multi-stakeholder (government, technologists, civil society) participation—particularly to protect the interests of the Global South.
- A central question raised was not just how AI will influence democracy, but how democratic values will guide and shape the future of AI itself.
Regulating Open Data: Principles, Challenges, and Opportunities
The session at the India AI Impact Summit 2026 focused on the critical debate surrounding government open data policy in the AI era. Using a creative scenario with characters from British political satire to frame the issue, speakers explored whether moving from voluntary open data initiatives to a regulatory mandate is necessary for India’s digital and AI ambitions. The keynote by Dr. Shashi Tharoor stressed that open data, thoughtfully designed and regulated, can be transformative public infrastructure—enhancing government transparency, fostering innovation, and enabling citizen participation. However, Tharoor emphasized that unchecked or poorly structured openness can introduce vulnerabilities, exacerbate inequalities, and lead to loss of sovereignty, especially when foreign tech firms dominate data processing and AI value creation. He advocated for a regulatory framework that balances openness with robust safeguards: clear purpose, privacy protection, accountability, domestic capacity-building, and preservation of policy space for development. The need for structured openness—supporting both cross-border cooperation and local empowerment—was underlined, referencing India’s digital public infrastructure journey, the G20 New Delhi Leaders’ Declaration, and the evolving global consensus on trustworthy, equitable data governance. The panel called for India to adopt a strategic approach to open data, ensuring that digital sovereignty and inclusion remain at the core of its AI-powered future.
- Debated shift from voluntary open data initiatives to statutory regulatory frameworks for Indian government data sharing.
- Regulatory discussion centered on mandatory, standardized, and accountable data sharing across ministries, with defined access models: free, paid, and restricted.
- Highlight on open data as public digital infrastructure enabling transparency and innovation but also risk of symbolic tokenism or asymmetric extraction if poorly regulated.
- Warning that in the absence of domestic capacity and regulatory control, open data can fuel digital inequality and offshored value capture by foreign AI firms.
- Advocated for regulatory frameworks ensuring: clear purpose, privacy, anonymization, informed consent, agency, enforcement, and grievance redressal.
- Data sovereignty is essential—public data should nurture domestic research, startups, and digital expertise, with national law prevailing over foreign commitments.
- Cross-border data flows remain critical for global cooperation, but must not undermine domestic regulatory capacity or development priorities.
- Reference to global trends: G20 New Delhi Leaders’ Declaration and UN Global Digital Compact emphasize digital public infrastructure, trust, and local capacity-building.
- Recognized India’s leadership in digital public infrastructure (India Stack), complementing EU’s approach to regulation—suggesting the global south is shaping new governance paradigms.
Invest India: Fireside Chat
The fireside chat at the India AI Impact Summit 2026, featuring Vinod Khosla and moderated by Niraj Ray, navigated the immense opportunities, challenges, and imperatives for India in the global AI revolution. The dialogue contextualized AI within the broader technology lifecycle: from capital-intensive beginnings to potential utility-level maturity. Khosla emphasized that infrastructure investment in AI is not only justified, but necessary for widespread deployment, predicting capabilities will far exceed expectations in just a few years. He warned, however, that the real barriers to AI adoption are political and societal, referencing irrational regulations that inhibit progress. For India, Khosla advocated a strategic AI push centered on public good: deploying free AI doctors, tutors, and agricultural experts for rural citizens via digital platforms, to build wide acceptance before disruptive workforce transitions. The business discussion lauded Indian AI startups like Emergent and Sarvam for pioneering high-impact, inclusive solutions, especially for non-technical and older users. Technologically, they highlighted the pressing compute and energy constraints, outlining a vision for massive efficiency improvements – where reduced training costs and radically cheaper inference could accelerate adoption. Both panelists agreed that India's focus must be on building capacity, capability, and consumption simultaneously, with disciplined capital, compute sovereignty, and public trust at the core.
- AI infrastructure investment, though capital-intensive, is justified and essential for India’s global competitiveness.
- Semiconductor and data center energy consumption is a major concern: global data centers use 80 GW (1% of world capacity), projected to double in three years.
- Hardware supply chains are highly concentrated; memory chips (HBM) are 80% sourced from just three companies, and annual capacity falls short of demand.
- AI is now seen as a strategic national asset, like nuclear capability, driving sovereign investment (e.g., Middle East sovereign funds).
- India’s AI push should begin with public services: Aadhaar-based AI doctors, tutors (e.g., CK12 for 4–5 million rural students), and agricultural advisors, to deliver immediate societal benefits.
- Success of Indian AI startups: Emergent named world’s fastest-growing software company (8 months old), and Sarvam’s sovereign Indian language models log 1 million minutes/day.
- AI is unleashing entrepreneurship among older, non-technical Indians, democratizing access to business creation.
- Vinod Khosla forecasts AI capability in 4-5 years will far outpace current expectations; AI inference cost has decreased 1,000-fold in 18 months and may drop 100-1,000-fold again in two years.
- Future adoption and return on AI investment hinge on political acceptance; public trust must be built before economic disruptions trigger backlash.
- Call for India to simultaneously build AI capacity, capability, and local consumption with disciplined capital allocation and a focus on compute sovereignty.
- Technological focus is shifting from hardware scaling (Moore’s Law) to data and compute efficiency in AI training and deployment.
- Policy caution: India must avoid regulatory pitfalls seen elsewhere (e.g., Germany's restrictive robot laws inhibiting productivity gains).
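The cost-decline figures above compound in a simple way. The short sketch below works only from the 1,000-fold/18-month number cited in the session (a quoted claim, not measured data) to show the implied per-month factor and what 24 more months at the same pace would mean:

```python
# Implied compounding behind a 1,000x inference-cost decline over 18 months.
# The input figures are the session's cited claims, not measured data.

def monthly_cost_factor(total_decline: float, months: int) -> float:
    """Fraction of cost retained each month, given a total decline over `months`."""
    return (1.0 / total_decline) ** (1.0 / months)

factor = monthly_cost_factor(1000.0, 18)   # ~0.68: costs fall roughly a third per month
residual_24mo = factor ** 24               # fraction of today's cost left after 24 more months

# At the cited pace, 24 more months would imply about a 10,000x further drop
# (1000**(24/18) == 10**4), i.e. steeper than the 100-1,000x range quoted above.
```

Note that the forecast bullet quotes a slower range (100-1,000x) than a straight-line extrapolation of the last 18 months, which is itself a hedge.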
The Role of Government and Innovators in Citizen-Centric AI
The session at the India AI Impact Summit 2026 featured a high-profile panel of European and Indian leaders, AI entrepreneurs, and key policy makers discussing strategies to accelerate the adoption of AI—particularly large language models (LLMs)—in the public sector. Key topics included building AI capacity through collaboration between India, the EU, and the Global South; addressing linguistic diversity using AI-driven translation tools; leveraging supercomputing resources and AI ‘factories’ for nationwide transformation; and the challenge of translating technological investment into real productivity gains, as illustrated by the Solow Paradox. Panelists provided real-world examples, such as job matching initiatives in France using AI and the development of digital twins for climate modeling, underscoring the potential of open, accessible, and culturally nuanced AI solutions to improve governmental efficiency and citizen service delivery. The need for co-creation with local research and deployment, as well as sustained training and change management for civil servants, was highlighted as crucial to overcoming adoption barriers and unlocking tangible benefits for society.
- India-EU collaboration was advocated to build global AI capacity and benefit the global south.
- Mistral (the French AI company) described its 'AI for Citizen' program, which deploys LLMs for public sector efficiency, such as automating legacy administrative processes and enhancing citizen support (e.g., job matching at France’s employment agency).
- Mistral also emphasized partnerships with local research labs to ensure cultural and linguistic relevance in AI deployments (examples: Morocco for dialects, Singapore for Southeast Asian languages).
- DeepL (German AI translation platform) highlighted AI’s role in bridging multilingualism for public administration, enabling legislation translation and real-time government-citizen communications across many languages.
- The Barcelona Supercomputing Center outlined ‘AI Factories’—physical and digital platforms for AI development and technology transfer, staffed with AI experts, designed for free and open access by public entities.
- Europe now hosts 6 of the world’s top 15 supercomputers, with strategic investments in EuroHPC to support scalable AI applications.
- European Commission's Roberto Viola showcased accessible public AI tools (e.g., Mistral, DeepL, 'Destination Earth' climate AI twin) as examples of citizen-facing innovation.
- Addressed the ‘Solow Paradox’: despite increased investment in IT and AI, productivity gains often lag due to overlapping old and new processes; true impact requires process reengineering and user empowerment.
- A European Investment Bank study cited a 4% productivity gain from AI adoption in the public sector: modest, but a break from zero-growth trends, especially with generative AI.
- Panelists agreed that AI’s transformative public sector impact depends on effective integration, change management, and widespread civil servant training.
Inclusive AI: Why Linguistic Diversity Matters
The session at the India AI Impact Summit 2026 showcased a groundbreaking collaboration between Bhashini and Current AI, resulting in the unveiling of India's first national, open-source, multilingual, handheld AI hardware device. This device is specifically designed to operate offline, prioritize user privacy, and support 22 Indic languages through 350 language models. The product demonstration illustrated its broad applicability, particularly for the visually impaired, enabling real-time inference in users' native languages even in zero-connectivity environments. Developed over just five to six weeks, this prototype exemplifies how open, collaborative, and locally-rooted AI can deliver tangible societal benefits and bridge India's linguistic diversity gap. Both Bhashini and Current AI emphasized their commitment to public-interest technology, with Bhashini detailing its journey from grassroots data collection to now facilitating about 15 million daily AI inferences. The session underscored the global vision of building open, collaborative AI systems that empower local communities and preserve linguistic diversity, inviting innovators to contribute to the ongoing platform development.
- Launch of an open-source, multilingual handheld AI hardware device supporting 22 Indic languages and operating fully offline.
- The device integrates privacy-preserving AI, automatic speech recognition (ASR), neural machine translation (NMT), and text-to-speech (TTS) modules, all quantized for high-efficiency local inference.
- Developed in just 5-6 weeks through a collaboration between Bhashini (an Indian public platform with 350 AI models) and Current AI (focused on global public-interest AI).
- Product demonstration showcased use cases such as vision assistance for visually impaired users in their native languages.
- Bhashini now runs 15 million AI inferences per day on a 200-GPU system, with real-time monitoring of usage and performance.
- The hardware is currently powered by NVIDIA Jetson but is platform-agnostic and designed for easy deployment of multiple custom models.
- Both partners stressed open-source, public good, and collaborative development models as core to their strategy.
- Call to innovators and communities to build AI solutions for their own languages and needs using this open platform.
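The module chain in the bullets above (speech recognition, then translation, then speech synthesis) amounts to a simple offline pipeline. The sketch below illustrates that shape only; the three stages are trivial hypothetical stand-ins, not Bhashini's actual APIs or model names:

```python
# Sketch of an offline ASR -> NMT -> TTS chain of the kind the device runs
# locally. The stage implementations here are placeholder lambdas for
# illustration; real stages would wrap quantized on-device models.
from dataclasses import dataclass
from typing import Callable

@dataclass
class OfflinePipeline:
    asr: Callable[[bytes], str]    # speech audio -> source-language text
    nmt: Callable[[str], str]      # source-language text -> target-language text
    tts: Callable[[str], bytes]    # target-language text -> speech audio

    def run(self, audio_in: bytes) -> bytes:
        text = self.asr(audio_in)
        translated = self.nmt(text)
        return self.tts(translated)

# Wiring with trivial stand-in stages:
pipe = OfflinePipeline(
    asr=lambda audio: audio.decode("utf-8"),
    nmt=lambda text: text.upper(),          # placeholder "translation"
    tts=lambda text: text.encode("utf-8"),
)
result = pipe.run(b"namaste")
```

Keeping each stage behind a plain callable is one way a device like this can stay platform-agnostic and swap in custom models per language, as the bullets describe.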
AI Meets Cybersecurity: Trust, Governance & Global Security
This session at the India AI Impact Summit 2026 focused on the urgent need to ground AI cybersecurity in concrete policy choices that prioritize human rights. Panelists discussed the risks posed by agentic AI systems—such as those that can autonomously act on behalf of a user—and the way traditional cybersecurity frameworks, like the Confidentiality, Integrity, Availability (CIA) triad, must adapt in the age of AI. Real-world cases, such as the vulnerabilities exposed by tools like OpenClaw and Microsoft Recall, illustrated new forms of risk, notably those that can compromise end-to-end encryption and user privacy through the blurring of boundaries between applications and operating systems. Government, technology, and civil society representatives agreed that while the opportunities of agentic AI are vast, the risks—including escalating attack surfaces and potential catastrophic failures—require a multi-stakeholder, cross-disciplinary, and evidence-driven approach to governance. The discussion called for moving beyond hype, strengthening public trust, and instituting robust, human-centric structures for managing AI-driven cybersecurity threats before a major crisis or 'Chernobyl moment' forces reactive regulation.
- Emphasis that AI cybersecurity is fundamentally a human rights issue—not merely a technical one.
- Discussion framed by the Confidentiality-Integrity-Availability (CIA) triad, highlighting new challenges to confidentiality (privacy, encryption), integrity (information accuracy, democratic discourse), and availability (critical digital infrastructure).
- Panelists warned about the risks of agentic AI systems, citing OpenClaw and Microsoft Recall as recent examples that exposed agentic vulnerabilities and new attack vectors, such as prompt injection.
- Integration of agentic AI into core operating systems by major tech companies (Google, Apple, Microsoft) is accelerating without adequate adaptation of security paradigms.
- Agentic AI poses a direct threat to end-to-end encryption by enabling exfiltration of sensitive data in ways that bypass traditional security safeguards.
- Government representatives underscored the lag in international cybersecurity norms and enforcement versus the pace of technological advance, despite a decade of negotiations.
- Fragmentation in cyber/AI dialogue across sectors and disciplines stymies holistic, legitimate, and effective governance solutions. Cross-sector, expert-driven dialogue is required.
- A call to action to move beyond theoretical or alarmist discourse—focusing on evidence-based, practical, and human-centric AI-cybersecurity strategies.
- Caution against waiting for an AI 'Chernobyl moment'; proactive governance and regulation are necessary to maintain public trust and avoid crisis-driven responses.
Policymaker’s Guide to International AI Safety Coordination
The session at the India AI Impact Summit 2026 focused on the accelerating global race toward artificial general intelligence (AGI), the urgent need for robust international AI safety governance, and the unique opportunities and responsibilities for middle powers and the global majority in shaping AI’s future. Speakers highlighted the uneven pace between AI technological advancement and safety regulations, calling for more coordinated, practical, and globally inclusive risk management. The forum showcased key organizations—AI Safety Connect and the International Association for Safe and Ethical AI—aimed at fostering international dialogue and developing governance and technical solutions. Participants applauded recent milestones such as the Hiroshima International Code of Conduct and the OECD AI Principles, while emphasizing the need for further harmonization, information sharing (including coordinated incident reporting), and open-source safety tools. The session stressed that trust, inclusion, and consensus-driven governance are essential to ensure safe and ethical AI deployment worldwide, with middle powers and diverse stakeholders playing a pivotal role in moving beyond rhetoric to real-world impact.
- AI Safety Connect convenes biannual global gatherings (most recently Paris, now India), engaging governments, industry, and academia to accelerate AI safety discussions.
- AI Safety Connect promotes inclusion of the global majority in frontier AI safety governance.
- Closed-door workshops produced substantive policy and industry discussions, with actionable outcomes to be published soon.
- International Association for Safe and Ethical AI (IASEAI) has grown to several thousand members and nearly 200 affiliates, holding its second international conference in Paris.
- OECD AI Principles (first adopted in 2019) now guide policies in 50 countries, setting the global baseline for trustworthy AI.
- The Hiroshima International Code of Conduct and its reporting framework have improved transparency: 25 organizations across nine countries have submitted risk management reports.
- A call for an international AI incident response center, with the Global Partnership on AI piloting a cross-border incident reporting framework.
- OECD launched an open call for open-source safety and evaluation tools to support trustworthy AI implementation.
- Middle powers like Singapore are urged not to be passive; they can leverage market position, regulatory innovation, and diplomatic ties to help set safety norms.
- Effective international coordination and practical safety infrastructure, including information and tool sharing, are essential to bridging global governance gaps.
Building the Next Wave of AI: Responsible Frameworks & Standards
The panel session at the India AI Impact Summit 2026 centered on operationalizing responsible, ethical, and inclusive AI within the Indian context, emphasizing the need for holistic frameworks, practical safety benchmarks, and multi-stakeholder collaboration. Keynote speaker Mr. Stefani Naguran highlighted the launch of the RAISE Index, a novel tool co-developed by ICOM and The Dialogue for quantifying and assessing AI risks across development and deployment phases; the index harmonizes requirements from leading global frameworks, is intended to be iterative and open, and specifically addresses India's challenges and opportunities at scale. Industry leader Ms. Arunati Batara underscored the necessity of organizational and global compacts to counter AI misuse, showcasing Salesforce’s long-standing ethical review processes. Startup representative Karna detailed practical approaches for MSMEs, emphasizing the productization of governance and human-in-the-loop systems to ensure safe, scalable AI adoption. The session concluded by advocating living, continuously updated benchmarks and broad stakeholder engagement to ensure AI’s societal benefits are realized safely, both within India and as a model for the developing world.
- Launch of the RAISE Index: A first-of-its-kind, open, and iterative AI safety and responsibility assessment tool developed by ICOM and The Dialogue for deployment and development phase evaluation.
- Telangana Data Exchange rolled out: A digital public infrastructure offering startups secure, sandboxed access to government datasets for validating AI solutions against real-world use cases.
- RAISE Index harmonizes leading global frameworks (EU AI Act, NIST, Singapore MASS, UK AI Assurance), enabling single-assessment, cross-jurisdictional compliance.
- Institutionalization of continuous learning and feedback into AI safety benchmarks, as opposed to static, one-time checklists.
- Salesforce highlighted its early adoption of an Office of Ethical and Humane Use, which has reviewed all technology and products prior to market debut since 2014.
- Call for a global compact and transparent information exchange to collectively prevent misuse of AI technology, especially in the age of deepfakes and AI-driven scams.
- Startups/MSMEs encouraged to embed governance directly into core products (productization), rather than relying on cumbersome compliance documents.
- Human-in-the-loop advocated as a core product feature—not a failure mode—to ensure accountability and trustworthy AI decisions.
- Demonstrated adoption at scale: Through productized governance, 30,000+ companies self-deployed AI agents for hiring via Blue Machines AI and Upna platforms.
Ethical AI: Keeping Humanity in the Loop While Innovating
The session, 'Humanity in the Loop: Sustaining Innovation and Ethics in the Age of AI,' was a UNESCO-sponsored panel at the India AI Impact Summit 2026, focusing on how to operationalize ethical, human-centric AI in a way that sustains innovation while safeguarding human rights and values. Key speakers included UNESCO leadership, global AI ethics scholars, business leaders, and policymakers from India and Europe. The discussion deconstructed the perceived trade-off between innovation and ethics, instead positing that embedding ethics into AI design fosters greater trust and wider adoption. Panelists emphasized early and ongoing integration of ethical oversight throughout the AI lifecycle, advocating for regulatory frameworks that are context-sensitive and based on risk. Drawing on lessons from pharmaceutical regulation and the EU AI Act, speakers argued that 'ethics by design' and risk-based exclusion of problematic AI use cases can build societal trust and accelerate responsible AI adoption, especially critical for diverse societies such as India and across the Global South. The consensus was that effective operationalization requires a multi-stakeholder and international approach, with continuous human accountability, systems for oversight, and clear standards for responsible AI development and deployment.
- UNESCO's 2021 Recommendation on the Ethics of Artificial Intelligence, adopted by 193 member states including India, provides the first global framework for ethical AI.
- The panel rejected the narrative of a conflict between innovation and ethics, emphasizing they are mutually reinforcing when ethics is integrated from the outset.
- UNESCO and industry leaders called for 'ethics by design,' where ethical principles are embedded from the earliest stages of AI system development.
- Industry experts highlighted the need to shift from treating regulation as an afterthought to making oversight an integral part of the AI development process.
- The European Parliament's perspective on the EU AI Act championed a risk-based regulatory approach: prohibiting certain high-risk AI use cases (e.g., predictive policing, emotion recognition in the workplace) while promoting transparency and trust.
- Panelists identified ongoing human accountability as crucial, asserting that current technology does not absolve humans from responsibility for AI outcomes.
- Better operationalization of ethical AI relies on sector-specific standards, multi-stakeholder collaboration, and ongoing assessment and iteration.
- Lack of clear frameworks has deterred wider AI adoption, with calls for transparency and trustworthy systems to unlock benefits, particularly in the public sector and for the Global South.
India’s AI Future: Sovereign Infrastructure and Innovation at Scale
The session at the India AI Impact Summit 2026 focused on building sovereign AI capabilities for India and the Global South, marked by the launch of a comprehensive 'Sovereign AI Research Report' from Amrita Vishwa Vidyapeetham. A distinguished panel, featuring leaders from HCL Software, Tata Communications, Genspark AI, and prominent academic institutions, discussed the core challenges and imperatives in scaling India's AI journey. Key themes included addressing the critical shortage of GPU-based compute infrastructure, fostering robust and distributed data stacks, overcoming the production and adoption challenges of AI applications, and ensuring interoperability across AI systems. Announcements highlighted concrete expansion in sovereign cloud infrastructure, with thousands of Nvidia GPUs operational, India-specific models released, and imminent launches of edge-focused vector AI engines. The session emphasized that for India to lead in AI, a focus on digital infrastructure, data-centric architectures, production-oriented deployments, and ecosystem-wide collaboration with executive sponsorship is necessary.
- Release of the 'Sovereign AI Research Report' by Amrita Vishwa Vidyapeetham, with senior representatives and academic leaders in attendance.
- Sunil Gupta (Y) announced the migration of major government applications, including Bhashini, to a sovereign Indian cloud now running on thousands of Nvidia GPUs, supporting the public launch of models such as BOM-Server, BharatGen, and Sockets.
- He projected the future need for millions of GPUs for India as use cases and UPI-like mass adoption scale up, urging investments in abundant localized digital compute infrastructure.
- Kalyan Kumar (HCL Software) announced an April launch of a localized vector AI engine designed for edge computing, and highlighted the strategic importance of modern data stacks, vector databases, and data contracts for AI readiness.
- Bruno Melo (Genspark AI) shared that Genspark crossed $200M ARR within ten months, identified India as the third largest market, and stressed that over 95% of enterprise AI pilots globally do not reach production due to lack of ROI visibility, fragmented compliance/data trust, and absence of executive sponsorship.
- Panelists agreed successful AI adoption demands cross-functional teams, measurable ROI frameworks, and interoperability at all layers—technical, data, and organizational.
- The need for interdisciplinary participation and open standards to promote meaningful collaboration was emphasized, citing remarks from India's Principal Scientific Adviser.
How Small AI Solutions Are Creating Big Social Change
The session at the India AI Impact Summit 2026 convened an eminent panel of global experts from organizations such as the Gates Foundation, Google Research Africa, Microsoft AI for Good, Parisante Campus, and the World Bank to discuss 'Small AI for Big Social Impact.' The discourse centered around the innovative deployment of small, data-efficient, and cost-effective AI models tailored for underserved and resource-constrained environments in the Global South and beyond. Panelists shared concrete examples from healthcare, agriculture, biodiversity monitoring, language data, and crisis management, emphasizing the need for AI solutions that are context-aware, locally relevant, and designed for scalability and accessibility—especially where infrastructure, compute power, or connectivity is limited. The conversation stressed the importance of collaboration with local partners, open-source approaches, and the shift from pilots to sustainable, large-scale deployments. Across all interventions, the guiding principle was the pragmatic development of AI that tangibly improves lives, focusing on inclusivity, equity, and meaningful impact for communities often overlooked by mainstream AI innovation.
- The panel defined 'small AI' as models that are data-efficient, affordable, able to run on edge devices, and meaningful to local contexts, particularly in underserved communities.
- The Gates Foundation’s Zamir Bray emphasized designing AI tools that work at the community level (e.g., district hospitals, smallholder farms), prioritizing fit for context over raw model benchmarks.
- Google Research Africa highlighted deployment of localized AI for 'weather nowcasting', critical for agriculture in Africa, achieved despite the continent having only ~37 radar stations (vs. ~300 in North America and Europe). The team also released an open-access speech dataset covering 27 African languages, supporting accessibility and ecosystem growth.
- Microsoft’s AI for Good Lab shared open-source projects: 'Sparrow' for monitoring biodiversity in remote regions using edge devices and satellite connectivity, already deployed in multiple countries; and 'Alert California,' leveraging 1,300+ cameras and AI to facilitate early wildfire detection, with both solutions adaptable globally.
- Parisante Campus (France) provided healthcare case studies, where 'small AI' models perform tasks like analyzing X-rays and dermatological images on small, offline-compatible devices—demonstrating reliability and real-world deployment in clinical environments.
- The World Bank advocated for scaling proven small AI pilots in low-resource settings, with the aim to bridge from local pilots to national-level impact (e.g., in health, education, agriculture), emphasizing replication, plug-and-play models, and clear KPIs to drive systemic poverty reduction.
- Panelists advocated for open source, strategic partnerships, and problem-first design, opposing the indiscriminate use of large foundation models and encouraging homegrown, relevant AI innovation.
- Common themes included reducing inequality via AI, enabling inclusivity for low-resource regions (both Global South and North), and establishing a global movement around context-driven, small-scale AI innovations.
Survival Tech: Harnessing AI to Manage Global Climate Extremes
This session at the India AI Impact Summit 2026 convened leading experts from India's Ministry of Earth Sciences, disaster management, academia, industry, and venture capital to chart a path for AI-driven solutions in climate, sustainability, and disaster management. Discussions centered on the integration of AI and traditional physical modeling to address hyper-local weather and climate challenges, the need for hybrid models leveraging both spatial and temporal data, and the growing significance of AI in early warning systems, health, and sustainability. The panel called for leveraging India's vast troves of historical data, upskilling talent, fostering transfer learning across regions, and advancing small data fine-tuning for foundation models to enable precise, impactful AI applications. Emphasis was placed on collaboration across public, private, and academic sectors to create trusted, accessible, and context-specific AI solutions for climate resilience, disaster management, and societal benefit.
- Panelists include the Ministry of Earth Sciences secretary, NDMA secretary, leading AI researchers (UT Austin, IIT Roorkee, Atria University), Nvidia scientists, and industry/VC leaders.
- Announcement of IRO (Indian Research Organization) as a platform to develop original, small, agile AI models, emphasizing independence from large, pre-trained foundational models.
- Three priority verticals: earth science/disaster management, health, and pharma—citing strong partnerships, e.g., with Indian Pharma Alliance covering 80% of pharma output.
- Consensus on the necessity of hybrid approaches—blending physics-based numerical models with AI models, particularly for high-impact, hyperlocal weather prediction (e.g., cloudbursts).
- Emphasis on opportunities for AI to enhance early warning systems via integration of sensor, satellite, and multimodal data, aiming for trusted, low-cost, citizen-centric alerts.
- Recognition of the disruptive potential of generative AI and multimodal sensors (e.g., multispectral cameras, satellite imagery) to improve short-term weather forecasting.
- Call for AI-powered voice assistants to deliver actionable, personalized guidance for climate resilience, especially for farmers and vulnerable populations.
- Highlighting the need for small-data fine-tuning of foundation models, enabling AI deployment in data-scarce yet high-impact local contexts.
- Endorsement of transfer learning to apply insights from data-rich to data-poor regions, leveraging the universality of climate phenomena.
- India's strengths: extensive historical weather data (150+ years from IMD), demographic dividend of skilled young talent, and a collaborative ecosystem spanning government, academia, and industry.
- Key bottlenecks include error growth in traditional modeling, underutilization of available data and talent, and the need for integration of physical and AI-based models.
MedTech and AI Innovations in Public Health Systems
The session at the India AI Impact Summit 2026 focused on the transformative role of artificial intelligence (AI) in India's public health systems, across both government and private sector perspectives. Government officials highlighted the newly launched 'SAHI'—India's Strategy for Artificial Intelligence in Public Health—which aims to reduce costs, improve care coordination, and enhance operational efficiency, especially in rural and resource-constrained settings. Panelists emphasized population-scale AI deployments for tasks like diagnostic imaging, specialist teleconsultations, and digitizing longitudinal health records to support universal health coverage and reduce out-of-pocket expenses. Discussions also addressed the institutionalization of health tech innovation, stressing the need for problem-driven solutions, robust ground-level evidence, policy guardrails, and use-case libraries. Industry representatives from Tata MD and social impact organizations demonstrated AI-driven workflow improvements, clinical decision support, predictive analytics, and wellness scoring to strengthen public health intervention and preventive care. Finally, the session underscored challenges in integrating startups, data collection across diverse contexts, and translating dashboards into actionable interventions for frontline health workers, aiming to make AI an intrinsic part of the delivery system rather than a superficial add-on.
- Government of India launched SAHI (Strategy for Artificial Intelligence in Public Health) to drive AI adoption at population scale.
- AI is being deployed for diagnostic tasks like diabetic retinopathy and X-ray image interpretation, especially addressing specialist shortages in rural areas.
- Digital platforms like eSanjeevani enable teleconsultations between primary and tertiary care providers.
- AI aims to cut out-of-pocket expenditure and enhance public trust by improving public health system effectiveness.
- India draws parallels with UPI's revolution in finance, targeting a 'UHI moment' (Unified Health Interface) built on digital public infrastructure in health.
- State-level examples (e.g., Andhra Pradesh) include articulated problem statements and applied technology centers to guide real-world AI innovation.
- Institutionalization rails include evidence-based evaluation of health tech, building use-case libraries, and clear AI policy/guardrails—especially around data sharing and monetization.
- Private sector (e.g., Tata MD) is supporting longitudinal digital health records, clinical decision support, and operational efficiency for medical officers and frontline workers.
- AI tools are being used to predict risks, create composite wellness scores, and prioritize high-risk patients for proactive and preventive care.
- Social impact organizations stress that most public health program failures result from variable implementation, not planning, and AI should transition dashboards from retrospective reporting to real-time, actionable guidance.
- Integration challenges remain: institutionalizing startups into the public system, capturing meaningful data across geography/language, and ensuring AI tools are embedded in workflows.
AI Transformation in Practice: Insights from India’s Consulting Leaders
This session at the India AI Impact Summit 2026 featured leaders from Deloitte and PwC, who discussed the transformative impact of AI on consulting business models, workforce structures, and enterprise workflows. Both emphasized AI not just as a tool for optimization but as a force for reimagining core business offerings and market access, including the previously untapped MSME sector. Key examples included automated tools for auditing and tax, digital marketing platforms for MSMEs, and advanced simulation for automotive and healthcare. The speakers highlighted their firms' substantial investments in AI, workforce upskilling, and the democratization of innovation. Challenges persist in AI adoption, however, particularly change management, data governance, and achieving measurable ROI at enterprise scale, with only 12% of companies reporting clear top- and bottom-line benefits. Ultimately, the session underscored the need for human-in-the-loop approaches, evolving workforce skills, and robust change management to realize AI's full potential in large organizations.
- PwC committed almost $1 billion to AI in 2023 for platform partnerships and workforce upskilling.
- Consulting business models are being inverted: AI enables a single consultant to manage work for 10 clients, with 80% of tasks performed by AI.
- AI-driven tools have saved 60,000 hours per quarter in audit operations by automating confirmation of balances.
- PwC developed 'Chat PwC' for internal efficiency and 'Navigate Tax,' an AI-driven tax advisory tool, launched six to seven months ago.
- A digital marketing platform for MSMEs now allows campaign launches across major social channels in minutes through simple prompts, democratizing access to marketing.
- Workforce structure is evolving, with middle management roles potentially shrinking, new skills (critical thinking, judgment, empathy) rising in importance, and the need for roles blending humans and machines.
- Only 12% of global corporations report achieving both topline (revenue) and bottom-line (profit) gains from AI investments, indicating ongoing ROI challenges.
- Data governance and security remain major barriers: examples cited include unintended data leaks via vendors’ use of generative AI tools.
- Change management and integration are the biggest hurdles in scaling AI from pilot projects to full enterprise adoption.
Global Coordination for AI Safety: Inside the AI Safety Connect Initiative
The session at the India AI Impact Summit 2026 focused on the urgent need for global coordination in AI safety and governance as artificial intelligence technologies advance at an unprecedented pace. The AI Safety Connect initiative, now in its second year and emphasizing inclusivity, has been convening global actors—including governments, industry, academia, and civil society—for regular, high-level dialogue and capacity building. Key highlights included the broadening of AI governance discussions beyond superpowers to the 'global majority,' advocating for a more balanced international influence, and emphasizing the importance of collective norm-setting and capacity building for frontier AI safety. Senior leaders from the OECD, Singapore, Malaysia, World Bank, and international industry and academic organizations participated, with forward-looking discussions on developing binding and operational governance mechanisms. Major reference points included the OECD principles, the Hiroshima International Code of Conduct, the new International AI Safety Report, and the Singapore Consensus on research priorities. Building trust, inclusivity, evidence-based policy, and international incident response mechanisms were repeatedly stressed as foundational to meaningful and safe AI deployment worldwide.
- The AI Safety Connect series, co-founded by Cyrus and Nicolas, convened its latest event in India after Paris, with the next planned for Switzerland, alongside biannual global gatherings (including at the UN General Assembly).
- Initiative emphasizes global majority engagement in AI safety, moving the discussion beyond traditional superpowers and aiming for common-sense risk management and concrete governance solutions.
- Partner organizations include the International Association for Safe and Ethical AI (IASEAI), the Digital Empowerment Foundation, Sympatico Ventures, the Future of Life Institute, and the Mindu Foundation.
- The event featured high-level participation from the OECD Secretary General, ministers from Singapore and Malaysia, the World Bank VP for Digital and AI, and AI industry and research leaders.
- Key frameworks referenced: OECD AI principles, Hiroshima International Code of Conduct, International AI Safety Report, and the Singapore Consensus on Global AI Safety Research Priorities.
- Consensus that AI governance must bridge technical safety challenges and global governance gaps, with collective input from governments, companies, civil society, and technical experts.
- Highlighted the critical need for more inclusive, objective, and binding international governance—suggestions included establishing an international AI incident response center.
- Stressed that policy makers from non-superpower nations ('middle powers' and the global majority) have a pivotal role through coordinated action, pooled resources, and regulatory innovation.
- Announced plans to publish the results of closed-door scientific dialogues with senior industry leaders focused on shared AI safety responsibility.
Building the Future: STPI Global Partnerships & Startup Felicitation 2026
The session at the India AI Impact Summit 2026, hosted by Software Technology Parks of India (STPI), brought together government officials, industry leaders, and startup ecosystem stakeholders to discuss scaling innovation and building a robust AI startup ecosystem. The discussion emphasized the pivotal role of AI as a transformative technology, faster and more disruptive than previous waves like electricity and internet adoption. STPI highlighted its initiatives, such as establishing 24 Centers of Entrepreneurship (several focused on AI), launching next-generation incubation schemes, and the Sanyuj portal—a one-stop platform for startups, mentors, investors, and government agencies. The critical role of Global Capability Centers (GCCs) in India's emergence as a global AI and innovation hub was underlined, with projections of 3,500+ GCCs, $150 billion in software exports, and 3.5 million employees by 2030. Industry speakers stressed that although experimentation with AI is abundant, large-scale enterprise integration remains limited. The main bottleneck lies not in technological capability but in operational readiness and access to enterprise validation and infrastructure. The co-creation model, where startups collaborate closely with GCCs, was flagged as essential for reducing the time from pilot to production, enabling startups to achieve global scale, and increasing economic, employment, and innovation multipliers for India.
- STPI has set up 24 Centers of Entrepreneurship across India, several dedicated to AI.
- Recently concluded Next Generation Incubation Scheme provided financial and operational support to startups.
- Sanyuj portal launched as an integrated community, resource repository, and marketplace for startups, investors, academia, and government.
- By 2030, India expected to have over 3,500 GCCs contributing $150B in software exports and generating 3.5 million jobs.
- Global AI is projected to contribute $15.7 trillion to the world economy by 2030, with $5 trillion in productivity gains.
- Over 50% of Indian GCCs now focus on R&D, shifting from cost/labor centers to strategic innovation hubs.
- Major challenge for AI startups is enterprise validation and scaling, not technological gaps.
- STPI enables access to enterprise-grade platforms, capital, and co-creation opportunities for startups.
- GCCs act as a bridge for Indian AI startups into global organizations, facilitating market access, compliance, and production-grade adoption.
- Co-creation models reduce the pilot-to-production cycle and strengthen India’s position as a global AI talent and innovation center.
Shaping the Future: AI Strategies for Jobs and Economic Development
The session at the India AI Impact Summit 2026, moderated by Tish P. Chopra (founder and CEO of industry.ai), brought together leaders from industry, government, and the public sector to address AI-driven strategies for workforce and economic growth. The discussion focused on the unique challenges and opportunities of adopting AI within the context of India's 70 million MSMEs, which employ over 230 million people, and broader emerging market dynamics shared with ASEAN and regions like Guyana. Key highlights included ASEAN's imminent Digital Economic Framework Agreement (DEFA), which aims to digitally interconnect 11 countries to double the region's digital economy from $300 billion to $1 trillion by 2030, prioritizing inclusion for Least Developed Countries (LDCs). Dr. Mahindra Karpen of Guyana emphasized how AI-enabled healthcare and digital infrastructure (notably through remote telemedicine and Starlink connectivity) offer transformative potential in underserved communities, laying groundwork for sustainable development. The session also underscored the growing demands on global energy infrastructure, projecting a fourfold increase to support AI data centers, necessitating at least $4 trillion in capital annually. Across all speakers, there was alignment on the urgency for responsible, inclusive, and context-specific AI adoption, prioritizing scalable solutions for MSMEs and cross-border collaboration.
- India's 70 million MSMEs employ 230 million people, produce 30% of GDP, and account for 50% of exports—underscoring their centrality to AI-driven economic transformation.
- ASEAN is finalizing the Digital Economic Framework Agreement (DEFA), set to be the world's largest legally binding digital agreement, integrating 11 countries (700 million people) for interoperability and growth.
- DEFA is projected to double ASEAN's digital economy from $300 billion today to $1 trillion by 2030, with disproportionate benefits for LDCs within the bloc.
- Guyana is leveraging AI for healthcare transformation, especially via telemedicine in remote areas, now enabled at over 200 sites using Starlink connectivity to reach Indigenous populations.
- AI is also being deployed in Guyana for primary healthcare, inventory management, public sector digitalization, and agriculture—positioning the country as a hub for international investment.
- Projected global energy demand could increase fourfold in the next decade to support AI and data center growth, requiring $4 trillion in investment per year.
- The session stressed the need for inclusive, sustainable, and responsible AI adoption, tailored to emerging markets and MSMEs, with India and ASEAN positioned as global leaders in context-driven AI deployment.
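The DEFA doubling projection above can be checked with simple growth arithmetic. A quick sketch, assuming the $300 billion baseline refers to 2026 and the target year is 2030 (the session gave only the start and end figures, not the exact window):

```python
# Implied compound annual growth rate (CAGR) for ASEAN's digital economy
# to grow from $300B to $1T. The 4-year window (2026 -> 2030) is an
# assumption made for illustration.
start, target, years = 300e9, 1e12, 4

cagr = (target / start) ** (1 / years) - 1
print(f"Required CAGR: {cagr:.1%}")  # roughly 35% per year
```

A sustained ~35% annual growth rate is far above typical GDP growth, which is why the agreement leans so heavily on interoperability and cross-border digital trade rather than organic expansion alone.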
From Human Potential to Global Impact: Qualcomm’s AI for All Workshop
Durga Malladi, EVP and General Manager at Qualcomm Technologies, delivered a comprehensive keynote at the India AI Impact Summit 2026, focusing on the transformative role of edge AI, evolving user interfaces, and Qualcomm's strategies for democratizing AI development. Malladi highlighted significant advancements in AI models, noting a trend toward smaller, more efficient multimodal models with superior performance and enhanced on-device capabilities. He emphasized the growing importance of edge AI for personal and enterprise use cases, driven by data privacy, latency, and connectivity independence. Concrete examples, such as AI-first phones and hybrid AI infrastructures, illustrated the market shift from cloud-centric to distributed computing. Malladi also showcased the Qualcomm AI Hub, designed to streamline developer onboarding and application deployment across devices, as well as Qualcomm's commitment to energy-efficient high-performance computing in data centers. The session underscored the convergence of hardware, models, and platforms, positioning Qualcomm as a pivotal enabler in the next phase of AI-driven user experiences.
- AI models have become dramatically smaller (from 175 billion to 7-8 billion parameters) while increasing in quality and efficiency.
- On-device AI can now run large models (up to 10 billion parameters on smartphones, 30 billion on PCs), reducing reliance on cloud connectivity and enhancing privacy.
- Multimodal small language models (SLMs) offer longer context windows, on-device learning, and improved reasoning.
- AI-first consumer devices, such as a newly launched phone in China, feature agents as the primary user interface, with traditional apps running in the background.
- Hybrid AI systems enable seamless switching between device, edge, and cloud processing depending on task complexity and user needs.
- Qualcomm's AI Hub offers developers free, cloud-native access to device forms for model testing and application deployment, simplifying the developer journey.
- Qualcomm focuses on energy-efficient high-performance computing in data centers, distinguishing architectures for model training and inference.
- The evolution of AI as the new user interface encompasses voice, touch, text, and sensors, culminating in personalized and context-aware AI agents.
- Case study: a 'HUMAIN PC' launched in Saudi Arabia allows real-time decisions on where to process each user query (on device vs. in the cloud).
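The shrinking-model trend in the bullets above is easiest to see as weight-memory arithmetic. A minimal sketch (the parameter counts and bit-widths below are illustrative round numbers, not Qualcomm's published figures):

```python
def model_memory_gb(params_billions: float, bits_per_weight: int) -> float:
    """Approximate RAM needed just to hold a model's weights."""
    return params_billions * 1e9 * bits_per_weight / 8 / 1e9

# A 175B-parameter model at 16-bit precision needs data-center memory,
# while a 4-bit-quantized 7B model fits comfortably on a flagship phone.
print(model_memory_gb(175, 16))  # 350.0 GB
print(model_memory_gb(7, 4))     # 3.5 GB
```

The same arithmetic explains the stated device ceilings: at 4-bit precision, a 10B-parameter model needs about 5 GB and a 30B model about 15 GB, which maps roughly onto current smartphone and PC memory budgets.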
Transforming Agriculture: AI for Resilient and Inclusive Food Systems
The session at the India AI Impact Summit 2026 focused on how artificial intelligence (AI) and digitalization can transform global food systems, making them more transparent, responsible, inclusive, and resilient. With leadership from India, the Netherlands, and Indonesia, the panel brought together stakeholders from governments, industry, academia, and international organizations to discuss both the opportunities of AI in agriculture and the significant challenges around adoption, data governance, infrastructure, and digital divides. Notable achievements and innovations were highlighted—such as dramatic reductions in water and pesticide use thanks to AI-based precision farming, and accelerated climate-resilient crop breeding. The Netherlands emphasized its global role in agricultural innovation and willingness to collaborate internationally to make AI solutions widely accessible, especially for smallholder farmers. The OECD underscored the promise of AI in optimizing resources and increasing yields but cautioned that adoption is uneven, with digital divides threatening to exacerbate inequalities. Both the Dutch and OECD representatives stressed the urgent need for robust policy frameworks, investment in connectivity and skills, responsible data use, and strong international cooperation to ensure AI's benefits reach the most vulnerable actors in the food chain.
- Panel co-chaired by India, the Netherlands, and Indonesia, focusing on AI's role in economic growth and social good within food systems.
- AI-enabled precision agriculture in the Netherlands achieves up to 90% water savings and 30% reduction in pesticide use without yield loss.
- OECD found a digital adoption gap: 96% of Australian farmers use digital tools versus 12% in Chile, highlighting urgent digital divide concerns.
- AI applications cited include early detection of crop diseases and climate threats, improved traceability, market transparency, and smart logistics.
- OECD’s new AI policy toolkit to provide context-specific guidance for countries, building on a global database covering 2,000 policies in 80+ jurisdictions.
- Support for inclusive AI aimed at empowering smallholders, women, and remote-area farmers to access global markets and share in AI’s benefits.
- Dutch investments in capacity building and co-creation with local partners, stressing that solutions must be tailored to unique national contexts.
- Barriers to AI adoption identified: high costs, limited digital skills, uneven infrastructure, fragmented data governance, and lack of trust.
- OECD pushing for responsible AI and interoperability to support resilient cross-border food supply chains and minimize environmental impact.
- International cooperation and knowledge sharing, especially through OECD and global partnerships, highlighted as essential for scaling trustworthy AI.
Science, AI & Innovation: India–Japan Collaboration Showcase
This session at the India AI Impact Summit 2026 featured social innovators and government representatives sharing actionable insights about AI's role in democratizing access to social welfare, education, and innovation infrastructure. Kitika Sanani from Intersection described the journey of leveraging AI and digital public goods to streamline Right to Education (RTE) admissions, reducing a previously burdensome ten-step process to a single-touch system and increasing equitable access for vulnerable children. The solution scaled from 196 admissions to over 900,000 in ten years, aiming for rapid acceleration towards covering the annual 2 million entitlement under RTE. The AI-powered, multilingual WhatsApp chatbot is enhancing targeting and reducing frontline worker burdens, while plans are underway to create a 'United Entitlements Interface'—a UPI-inspired single gateway for accessing multiple constitutional rights and welfare benefits. Manu from the federal Atal Innovation Mission described initiatives to embed AI and frontier technologies at scale, connecting education to high-tech startups, and building innovation ecosystems across states, with emphasis on social capital and public service delivery improvements. The discussion highlighted AI's transformative potential for deepening impact, simplifying access, and scaling up solutions across sectors, positioning India as a leader in tech-enabled social inclusion.
- Intersection has worked with 18 state governments and national ministries to embed AI and digitization into welfare delivery, targeting vulnerable populations.
- Their AI-powered, modular RTE Management Information System has scaled admissions from 196 (2013-14) to over 900,000 children today; the goal is the full annual entitlement of 2 million (20 lakh) seats, with coverage rising from 30% toward 60%.
- The WhatsApp multilingual chatbot, powered by AI, eases application processes for parents and assists frontline workers, further improving targeting and access.
- Intersection is developing a 'United Entitlements Interface,' a digital public good platform inspired by UPI and DigiLocker, to enable citizens to check eligibility and apply for multiple constitutional rights in one place.
- Atal Innovation Mission is building state innovation missions across India to bridge disparity, embed AI and frontier technologies, and foster grassroots impact beyond valuation-focused unicorns.
- Startups, MSMEs, and nonprofits are leveraging AI as a supercharged tool to solve real-world problems, not just for economic growth but also to build social capital.
- India's unique gig economy landscape and the role of tech-enabled startups in formalizing labor markets were recognized as significant social shifts influenced by AI.
- Policy-makers stressed the importance of AI for public service delivery, emphasizing impact at scale and ecosystem catalyzation, particularly in lagging regions.
Building Population-Scale Digital Public Infrastructure for AI
The session at the India AI Impact Summit 2026 focused on the concept of 'diffusion pathways'—structured approaches to ensure the rapid, safe, and inclusive spread of AI technology, particularly for public good across different countries and sectors. The discussion cited real-world examples of accelerated AI deployment, such as agriculture app rollouts in India, Ethiopia, and with Amul, demonstrating significant reductions in implementation time through shared experience and knowledge transfer. The coalition announced an ambitious goal: 100 AI diffusion pathways by 2030, supported by a diverse set of global partners (including Google, Gates Foundation, UNDP, and others), aiming to build institutional capacity and compress the learning curve for AI adoption. Panelists highlighted that the key to scalable and sustainable AI impact lies not just in technological innovation but in addressing bottlenecks like fragmentation, procurement, cultural change within government, and the importance of contextual, iterative, and locally relevant solutions. Global health, education, and public service cases were referenced as areas where scaling AI beyond pilots to lasting systems requires institutional pathways and supportive state transformation, especially around procurement and digital public infrastructure.
- 2.5 million farmers have downloaded an AI-powered app providing crucial information on prices and weather.
- Initial deployment in Maharashtra took 9 months; the same tech was deployed in Ethiopia in 3 months and in Amul's dairy initiative in 3 weeks, illustrating dramatic acceleration from shared 'pathways.'
- The session announced a coalition to establish 100 AI diffusion pathways by 2030 to enable rapid, equitable adoption of positive AI use cases worldwide.
- Coalition partners include Google, Gates Foundation, UNDP, among others, inviting open participation from all stakeholders.
- Diffusion is defined as spreading knowhow, trust, and institutional capability—not just technology access—enabling sustainable AI integration at national and global levels.
- Key barriers to AI scaling include fragmented pilots, procurement policies prioritizing low risk over innovation, and the need for digital infrastructure and outcome-focused public sector transformation.
- India is now the second-largest user base for Anthropic's Claude AI, after the US.
- Success factors for diffusion: contextualization (local language and work context), workflow integration, and iterative, inclusive approaches.
- Gates Foundation is investing in 'scaling hubs' in India and Africa to aggregate and channel AI innovation for systemic scale-up, not just isolated pilots.
- Brazil's Ministry of Management and Innovation shared institutional reforms—creating a special secretary for state transformation and shifting procurement from process/risk minimization to innovation/outcome orientation.
AI, Algorithms, and the Future of Global Diplomacy
This AI Impact Summit session brought together leading voices from Germany's government, policy foundations, and Indian think tanks to dissect the evolving role of AI in diplomacy, foreign policy, and international cooperation. Key German officials outlined the rapid establishment of AI and data labs across all federal ministries, emphasizing agile, user-driven co-creation as the ideal model for AI deployment within state institutions. At a macro level, panelists examined how AI is both a new technological revolution and a continuation of historic patterns where emerging technologies reshape global power dynamics. India and Germany were repeatedly identified as 'middle powers'—not at the AI research frontier like the US or China but possessing unique assets—India's tech workforce and contextual innovation, Germany's industrial data and regulatory expertise—that could be leveraged for impactful sectoral applications. Concrete opportunities for Indo-German collaboration were flagged in areas such as healthcare AI (where India offers massive data and Germany capital for investment), industrial AI, and automation. The conversation stressed the strategic advantages of open-source AI development, highlighted India's growing focus on practical use cases and inclusive deployment, and concluded that middle powers can carve out distinctive and influential AI roles through partnerships and sectoral leadership rather than direct competition with superpowers.
- Since 2022, 16 data and AI labs have been established across all German federal ministries, including the Foreign Office, accelerating agile AI adoption within government.
- Germany is prioritizing open-source AI solutions—both models and surrounding infrastructure—for diplomatic applications, emphasizing reusability and security.
- AI is positioned as a tool reshaping diplomatic work, enabling faster document analysis and negotiation support for diplomats.
- Panelists framed the rise of AI as a new chapter in tech-driven diplomacy, akin to the industrial, nuclear, and space revolutions.
- India and Germany are identified as 'middle powers'—not leaders in developing 'frontier' AI models, but influential in innovation, regulations, industrial adoption, and deployment.
- India has recently launched 14 indigenous language AI models over 14 days, signaling rapid grassroots innovation and deployment.
- Potential high-impact Indo-German cooperation is identified in healthcare (where India offers vast datasets and Germany finances/expertise) and industrial AI (Germany's automation/data and India's tech capabilities).
- The Summit's rebranding from 'AI Action' to 'AI Impact' (from the French to Indian presidencies) signals India's aim to lead in AI deployment and sectoral impacts.
- Open-source approaches and focus on real-world AI applications are positioned as strategic, cost-effective ways for middle powers to achieve rapid and context-appropriate innovation.
- There is growing middle-power collaboration advocacy, as highlighted by recent international calls (e.g., Canadian PM), showing recognition of a multipolar approach to global AI governance.
How the Global South Is Accelerating AI Adoption: Finance Sector Insights
The session at the India AI Impact Summit 2026 explored the critical transition of AI in finance from experimental, frontier technologies to responsible, scalable, institutionally embedded systems. Key leaders from RBI, JP Morgan Chase, and India’s fintech ecosystem discussed the paramount importance of legitimacy, trust, strong internal governance, and principled regulation over pure model performance. The Reserve Bank of India highlighted its 'tech-neutral' approach, focusing on enabling innovation while strengthening consumer protection, accountability, and board-level governance for AI adoption in financial services. JP Morgan Chase emphasized the maturity of AI deployment across compliance, fraud mitigation, and operational efficiency, underscoring the exportability of India’s regulatory principles across sectors. India's burgeoning fintech sector sees AI as a strategic lever for productivity, financial inclusion, and engagement, while instituting best practices like human-in-the-loop safeguards and robust data protection. The RBI's recently published seven 'sutras' or guiding principles for AI adoption—now adopted across Indian government sectors—stand out as a framework to balance innovation with risk management, aiming to unlock AI's broad societal benefits through trustworthy deployment.
- India’s financial sector is prioritizing responsible AI adoption with a focus on innovation enablement alongside risk mitigation.
- The Reserve Bank of India (RBI) has adopted a 'tech-neutral' and 'tech-agnostic' regulatory stance to accommodate evolving technologies.
- Existing regulatory frameworks (e.g. consumer protection, IT outsourcing, audit guidelines) are being extended with AI-specific incremental guidance.
- The RBI’s new report introduces seven 'sutras' (principles) for AI lifecycle and risk governance, adopted by the Government of India across sectors.
- Liability and accountability for AI outcomes are to rest with regulated entities (model deployers) rather than developers.
- Supervision, internal audit, and assurance frameworks are being updated to better manage AI’s incremental risks.
- AI's core strategic advantages in Indian finance include better operational efficiency (e.g. an estimated $60-100 billion in annual OPEX savings), overcoming 'thin-file' underwriting challenges, and boosting reach via multilingual conversational interfaces.
- Best practices identified include keeping a 'human in the loop' for high-impact financial decisions and strict data protection (DPDP compliance).
- JP Morgan Chase confirmed extensive use of AI in fraud/scam detection, payments, compliance, and highlighted the global relevance of India’s principles-based approach.
- Panelists agreed that while AI capability is becoming commoditized, institutional legitimacy and trust are the true differentiators for successful deployment.
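The 'human in the loop' safeguard described above can be sketched as a simple routing rule placed in front of a model's score. Everything here (thresholds, field names, the notion of a single score) is hypothetical, not taken from the RBI sutras or any panelist:

```python
from dataclasses import dataclass

@dataclass
class LoanApplication:
    applicant_id: str
    model_score: float  # model's risk score in [0, 1]; higher = lower risk
    amount_inr: float

# Hypothetical policy knobs -- a real lender would derive these from its
# risk appetite and regulatory obligations.
AUTO_APPROVE_SCORE = 0.85
HIGH_IMPACT_AMOUNT = 1_000_000  # above this, a human always decides

def route(app: LoanApplication) -> str:
    """Only low-impact, high-confidence cases skip human review."""
    if app.amount_inr >= HIGH_IMPACT_AMOUNT:
        return "human_review"  # high-impact decisions never auto-approve
    if app.model_score >= AUTO_APPROVE_SCORE:
        return "auto_approve"
    return "human_review"

print(route(LoanApplication("A-001", 0.92, 50_000)))     # auto_approve
print(route(LoanApplication("A-002", 0.92, 5_000_000)))  # human_review
```

The design choice mirrors the panel's point: accountability stays with the regulated entity, so the gate is defined by impact and confidence rather than by the model alone.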
Building Scalable AI Through Global South Partnerships
During the session at the India AI Impact Summit 2026, Sunil Wadhwani, co-founder of the Wadhwani Institute for Artificial Intelligence, shared the journey of his institute in applying AI to address pressing societal challenges in India and beyond. Initially facing obstacles in scaling impactful AI solutions, Wadhwani described how close collaboration with government agencies, integration with public digital infrastructure, and a user-focused approach transformed their ability to drive broad-based change. Key successes include AI-driven tuberculosis detection—now a national standard in India—AI-powered sputum analysis, and algorithms predicting TB medication non-adherence, as well as personalized AI tools tackling early-grade reading proficiency to reduce school dropouts. With their solutions now impacting over 100 million people in India and expanding to other countries in the Global South, the institute exemplifies how AI, when coupled with partnerships and a systems-level approach, can achieve scalable social impact. This aligns with new commitments such as the Gates Foundation's 'Advantage India for AI' initiative and Prime Minister Modi’s vision of 'Design and Deliver India for the World.'
- The Wadhwani Institute for Artificial Intelligence was launched in India in 2018 to focus on socially impactful AI solutions.
- AI for tuberculosis: Developed cough-based TB detection using smartphones (now national standard), AI-powered rapid sputum analysis, and predictive tools for medication adherence—collectively increasing TB detection rates by 25% and benefiting tens of millions.
- AI in education: Developed adaptive, AI-driven reading proficiency tools for young children, now mandatory for 3 million students in one state and adopted by others to address dropout rates.
- Key operational lessons: Early and deep government partnership, planning for scale from the outset, integration with existing government digital platforms, and designing AI tools that make frontline workers' lives easier.
- AI solutions and digital infrastructure integrated with Indian government platforms like Ni-kshay (TB) and Rakshak (education), enabling large-scale deployment.
- Institute’s reach: Over 25 AI solutions delivered, impacting 100 million people yearly in India, with a target of 500 million by 2030.
- Expanding to the Global South: Partners and pilots starting in Rwanda, Ethiopia, Kenya, with inquiries from multiple other countries.
- Capacity building: Training civil servants and ministries on responsible AI, data governance, and deployment.
- Announced partnership with Bill & Melinda Gates Foundation's new 'Advantage India for AI' initiative to invest in AI for the Global South.
- Echoes Indian government's strategic shift: PM Modi’s call to 'design, develop, and deliver' India-centric AI solutions for the world.
The Future of Public Safety: AI-Powered Citizen-Centric Policing in India
This session showcased the transformative role of AI-powered language tools in democratizing rural governance in India. The Ministry of Panchayati Raj, traditionally constrained by linguistic diversity and manual record-keeping, has implemented digital platforms like eGramSwaraj and e-Amrit Suraj for financial tracking and, critically, leveraged AI solutions such as Bhashini and SabhaSaar to overcome language barriers and automate meeting documentation. The SabhaSaar tool, launched in August 2025, enables automated transcription and summarization of Gram Sabha meetings in local languages, leading to widespread adoption and improved transparency. Integration with schemes like Swamitva has unlocked additional use cases, such as mapping the solar potential of village rooftops using drone data. The panel emphasized expanded inclusivity, participation, and accountability in rural decision-making, highlighting structural shifts in record-keeping and the promise of localized AI adoption with ongoing language expansion efforts.
- AI tools like Bhashini and SabhaSaar are being used to overcome language barriers and automate the documentation of over 1 lakh (115,115 as of February 2026) Gram Sabha meetings.
- The SabhaSaar tool, launched on August 14, 2025, uses AI-powered Automatic Speech Recognition (ASR) to convert meeting recordings into draft minutes in local languages, significantly reducing administrative burden.
- Integration of the Swamitva drone survey data with AI has enabled mapping of rooftop solarization potential in 2.38 lakh Gram Panchayats, which is further tied to government schemes for incentives.
- Digital financial platforms such as eGramSwaraj and e-Amrit Suraj now ensure end-to-end online tracking from planning to payment, increasing transparency and accessibility.
- Structural constraints, such as limited language support, are being addressed by adding 11 more Indian languages and dialects to the AI models (including Assamese, Bodo, Maithili, Santali, etc.), with the help of state governments.
- Surveys indicated 65% of Panchayat secretaries' time was spent on conducting and recording meetings; AI documentation tools have alleviated this burden.
- States leading in SabhaSaar adoption include Odisha, Tamil Nadu, and Tripura, which are developing further tools for tracking activities post-meeting.
- The consultative, citizen-focused approach extends inclusivity to 900+ million rural residents and encourages active participation and oversight by citizens, including those living away from their villages.
- The Ministry did not require Panchayats to make new investments for AI adoption; existing mobile phones suffice for participation.
- The framework benefits from India's digital public infrastructure, including Bhashini language models and GPU access under national missions.
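The ASR-to-draft-minutes flow described above can be sketched end to end. This is purely illustrative: the real tool's speech models and summarization logic are not described in the session, and the function names and keyword heuristic below are invented for the sketch.

```python
def transcribe(audio_path: str) -> str:
    """Stand-in for the ASR step (e.g., a Bhashini speech model).
    A real pipeline would call a speech-recognition API here."""
    raise NotImplementedError("plug in an ASR backend")

def draft_minutes(transcript: str) -> dict:
    """Turn a raw meeting transcript into structured draft minutes by
    pulling out sentences that record decisions (crude keyword heuristic)."""
    sentences = [s.strip() for s in transcript.split(".") if s.strip()]
    decisions = [s for s in sentences
                 if "decided" in s.lower() or "approved" in s.lower()]
    return {"decisions": decisions, "sentence_count": len(sentences)}

sample = ("The Gram Sabha decided to repair the village well. "
          "The solar rooftop proposal was approved. Attendance was recorded.")
print(draft_minutes(sample)["decisions"])
```

Even this toy version shows where the time savings come from: the secretary reviews and corrects a structured draft instead of writing minutes from scratch.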
The Global Power Shift | India’s Rise in AI & Semiconductors
The session, moderated by AMD India's Jaya Jagadish, brought together key voices from industry, government, and academia to discuss India's growing centrality in the global semiconductor and AI landscape. Panelists highlighted India's robust talent pool, deepening manufacturing and design ecosystem, and evolving policy frameworks that are propelling the country toward AI and semiconductor leadership. Key government initiatives, such as AI-specific tax holidays, the India AI Mission with significant financial outlays, and the creation of national data platforms, were showcased as evidence of sustained commitment. Private sector appetite for manufacturing and industrial growth is responding to both local demand and recent global disruptions, while public-private partnership models—mirroring global best practices in national computing strategies—were advocated to accelerate India's ascent. The discussion stressed the importance of developing sovereign capabilities, investing in local IP creation, supply chain resilience, and leveraging niche areas within the semiconductor ecosystem, particularly as the world shifts toward AI-driven industrial competitiveness.
- India is experiencing a rapid shift to the center stage of global AI and semiconductor power due to a convergence of talent, manufacturing scale, and policy momentum.
- Recent government initiatives include tax holidays for data centers and the launch of platforms like AI Kosh, supporting indigenous data and application development.
- The India AI Mission has allocated over ₹10,000 crore (approx. $1.2 billion) over 5 years, addressing seven foundational pillars for AI growth.
- Private capital commitments have crossed $100 billion, primarily targeted at scaling manufacturing, data centers, and localization efforts.
- There is growing emphasis on enhancing local IP ownership in semiconductor design and encouraging startup participation—targeting a pipeline of up to 50,000 startups.
- Panelists advised that India can create near-term value by focusing on supply chain niches (e.g., optical interconnects for AI infrastructure), not just leading-edge fabrication.
- Lessons from global public-private partnership models (e.g., U.S. National Compute Initiatives) are seen as applicable to India for aligning research, infrastructure, and industrial capacity.
- Developing sovereign AI capabilities, scalable compute infrastructure, and resilient supply chains are considered essential for India's sustained competitiveness.
Building Trustworthy AI: Foundations and Practical Pathways
The session delved deeply into the paradigm shift from general-purpose hardware to general-purpose software, emphasizing how AI, especially large language models (LLMs), is fundamentally transforming economies, labor markets, and the informational fabric of the internet. The speaker highlighted that while earlier computation revolutions were driven by the advent of universal hardware capable of running multiple applications, the current AI wave is shifting this 'generalization' onto software itself—resulting in systems that can perform myriad tasks through instructions in natural language rather than specialized software. This development is disrupting traditional software business models, eroding the economics of web-based content creation and open-source communities, and presenting new challenges regarding trust, correctness, and risk in AI deployment. The segment also introduced a comprehensive taxonomy of 37 AI safety risks developed by the speakers’ research team, reflecting an urgent call for context-sensitive risk definitions and policy responses, particularly apt for India's vast scale and unique technology landscape. The need for robust frameworks to quantify and mitigate AI-related risks, and to align AI behavior with human expectations, was underscored as a central imperative for the forthcoming AI era.
- AI is shifting the paradigm from building specialized software for each task to developing general-purpose software that can perform multiple tasks via natural language instructions.
- This shift is leading to the collapse of traditional software and web-based economies, including web design firms and ad-supported content sites in India, as their core value propositions are rapidly eroding.
- Visit rates for top search engine results have plummeted from 1 in 6 to 1 in 1,500 due to users turning to AI chatbots for information, undermining the business models of content sites.
- Open-source software communities face existential threats as AI systems can now generate code and solutions without requiring users to access original repositories or libraries.
- The accessibility of LLMs has democratized powerful AI, enabling non-technical users to build software or generate content, while simultaneously introducing critical risks around ambiguity, alignment, and correctness.
- The team unveiled a new taxonomy of 37 distinct AI safety risks, now available online, tailored to India's context and challenges of scale.
- The pressing challenge for policy and research is to quantify and mitigate risks defined by the likelihood and severity of undesirable outcomes, especially in high-stakes sectors like education and healthcare.
- Existing global AI risk frameworks do not adequately account for the unique scale and practical challenges found in India.
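The session's risk definition—likelihood combined with severity of undesirable outcomes—can be made concrete with a minimal expected-harm scoring sketch. This is purely illustrative and not the research team's actual methodology; the risk names and scales below are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Risk:
    name: str
    likelihood: float  # probability of the undesirable outcome, 0..1
    severity: float    # harm if it occurs, on an agreed 0..10 scale

    def expected_harm(self) -> float:
        # Risk as likelihood times severity of the undesirable outcome
        return self.likelihood * self.severity

# Hypothetical entries for high-stakes sectors named in the session
risks = [
    Risk("exam-answer hallucination (education)", 0.10, 8.0),
    Risk("mistranslated dosage advice (healthcare)", 0.02, 10.0),
]

# Rank highest expected harm first to prioritize mitigation effort
for r in sorted(risks, key=Risk.expected_harm, reverse=True):
    print(f"{r.name}: {r.expected_harm():.2f}")
```

A real taxonomy-driven assessment would replace the point estimates with context-specific distributions, but the ranking step is the same.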
Building Inclusive Societies with AI
The session at the India AI Impact Summit 2026 focused on the transformative role that AI and digital technologies can play in empowering India's massive informal workforce—comprising over 490 million individuals such as carpenters, plumbers, electricians, and farmers—who are often excluded from technological advances. Esteemed panelists representing industry, the development sector, and the government discussed systemic challenges faced by informal workers, including lack of discovery and trust, irregular demand, delayed payments, insufficient upskilling, and limited access to social protection. Key recommendations included creating a digital marketplace for verifiable credentials, facilitating timely and transparent payments, and implementing digitized, targeted upskilling initiatives. The panel highlighted the urgent need for accountable execution bodies to move beyond reports to impactful, on-ground solutions. Notable insights pointed to the necessity of differentiated, region- and persona-specific interventions, particularly for India’s most vulnerable, to drive inclusion, productivity, and sustainable growth in tandem with the nation’s AI revolution.
- India's informal workforce constitutes 490 million people, forming the backbone of the country's economy yet often excluded from AI-driven advancements.
- Systemic challenges identified include: discovery and trust deficits, sporadic demand, delayed or unfair payments, lack of upskilling and credentialing, and poor access to social protections.
- A digital marketplace and platform were recommended for aggregating opportunities, credential verification, and timely payments, with accountability in execution cited as a critical gap.
- Maharashtra's newly constituted Department of Skills, Employment, Entrepreneurship, and Innovation oversees over 1,000 ITIs and newly established public skill universities, focusing on accredited, inclusive skilling programs—also targeting vulnerable groups such as prisoners, people with disabilities, women, and tribal populations.
- Development leaders underscored the extreme vulnerabilities in India’s bottom quartile—especially in poverty-stricken regions—highlighting that up to 200 million Indians remain in poverty and require distinct, targeted interventions.
- Calls for persona- and region-specific solutions, not "cookie-cutter" approaches, acknowledging the unique challenges faced by cultivators, artisans, and textile and trade workers.
- The importance of moving from policy recommendations to accountable, impactful implementation, with clear ownership over execution, was repeatedly emphasized.
AI Automation in Telecom: Ensuring Accountability and Public Trust | India AI Impact Summit 2026
The session at the India AI Impact Summit 2026 focused on leveraging AI-driven operations to build customer trust within the telecom industry. The panel brought together senior experts from telecom operators, government, and industry bodies like GSMA and C-DOT, who discussed the dual challenges of innovation and privacy in the face of rapidly evolving telecom fraud. AI’s pivotal role in fraud detection, spam prevention, cyber security, disaster management, and customer transparency was extensively highlighted. Notable successes included the disconnection of 2.1 million fraudulent SIM cards using AI, cross-industry anti-scam initiatives, and deployment of AI-powered applications such as C-DOT’s Fraud Pro, Chakshu, and Sanchar Sathi with massive adoption (18 million+ downloads). Speakers emphasized the importance of outcome-focused collaboration across stakeholders, balancing regulatory and customer service needs, and India’s global leadership in AI-powered telecom innovation, with cross-border relevance and exportability of domestic solutions.
- AI-driven tools have facilitated the disconnection of 2.1 million fraudulent mobile numbers in India.
- C-DOT's Fraud Pro platform is used for deduplication across telecom, banking, and government databases, leading to sector-wide fraud prevention (70 lakh connections self-disconnected).
- Chakshu and Sanchar Sathi apps have seen exceptional adoption, with Chakshu used for crowdsourcing spam reports and Sanchar Sathi boasting 18+ million downloads and 2.5 billion website hits.
- AI-based risk indicators mandated by RBI are now widely utilized by banks to halt suspicious financial transactions in real-time.
- Advanced AI is being employed for active cyberattack defense, real-time fraud/spam call blocking, and federated disaster management information.
- GSMA highlighted its global anti-scam task force spanning 39 organizations and 17 countries, emphasizing industry-driven innovation over regulation-heavy approaches.
- Panelists stressed the need for human oversight in AI decision loops and the ongoing adaptation to new cross-border fraud tactics.
- India is positioning itself as a telecom AI superpower, exporting and sharing its homegrown scalable solutions globally.
Capacity Building in Digital Health
This session at the India AI Impact Summit 2026 brought together leading voices from healthcare, regulation, and technology entrepreneurship to discuss AI-enabled capacity building in India's health sector. Key insights included the Indian Nursing Council's (INC) significant curricular updates, notably the integration of AI and digital health competencies, simulation-based training, and digital courses for 2.2-3 million Indian nurses—around 10% of the world’s nursing workforce. The panel also addressed systemic challenges, such as the underutilization and professional status of pharmacists; the global shortage of healthcare workers; the economic and climate-related impacts of these shortages; and the urgent need for the Indian health system to break silos and adopt digital solutions, including AI-powered health ERPs and telemedicine. Emphasis was placed on designing scalable, context-adaptive AI solutions and fostering change management and a growth mindset. There was a call for a holistic, ecosystem-driven approach wherein startups and health tech companies play a dual role: modernizing current practices and co-creating the next generation of the healthcare workforce.
- INC updated nursing curriculum in 2021 to integrate AI and digital health, making simulation labs mandatory and distributing VR and training equipment.
- 2.2-3 million Indian nurses now trained in digital competencies; INC trains 250,000 new nurses annually.
- Two National Reference Simulation Centers established—one in Gurgaon and another recently in South Ballot—to facilitate technology-driven training.
- Faculty digital preparedness initiatives: Over 2,000 nursing faculty trained to use simulators.
- Launched a six-month professional digital nursing course for in-service nurses, linked to Continuing Nursing Education (CNE) credits.
- Online registration and digital professional development opportunities for nurses rolled out nationwide.
- Global shortage of healthcare workers estimated at 10-12 million; the resulting loss is put at about 15% of global GDP (roughly $18 trillion of a $120 trillion world economy).
- Nurses are in such high demand globally that the US offers special visas for nursing professionals.
- AI and digital health seen as 'low-hanging fruit' for ramping up healthcare delivery and overcoming workforce shortages.
- Vision for interoperable, ecosystem-wide health ERPs to break silos among Indian healthcare stakeholders.
- Push for global accessibility: Indian doctors could remotely serve patients globally (e.g., Kenya), leveraging digital tools.
- Emphasis on 'mindset change'—not just technology adoption—for pharmacists, nurses, and health workers to achieve system transformation.
- Tech companies advised to build scalable—both in volume and complexity—AI solutions to suit India’s diverse healthcare readiness levels.
- Entrepreneurs called to co-create next-generation healthcare workforce and ensure technologies support users at every stage of maturity.
Smaller Footprint, Bigger Impact: Building Sustainable AI for the Future
The opening session of the India AI Impact Summit 2026, co-chaired by France and India, established sustainable and resilient AI as a top priority on the global digital transformation agenda. Featuring keynote addresses from Anne Le Hénanff, France's Minister Delegate for AI and Digitalization, and Dr. Tawfik Jelassi, UNESCO's Assistant Director-General, the session announced major multi-stakeholder initiatives addressing AI's energy consumption, environmental footprint, and equitable deployment in resource-limited settings. Central announcements included the rapid growth of the Sustainable AI Coalition—now with 220+ partners across 15 countries—and the launch of the Resilient AI Challenge, an international competition to drive the development of energy-efficient, compressed AI models. Insights from global leaders, including panelists from Google, Mistral AI, Kenya, and India, underscored the need for both policy innovation and technical advances, with a consensus that resource-efficient, inclusive AI is essential for equitable digital progress. The session emphasized collaborative action, measurement standards, and the urgent shift from bigger, more energy-intensive models to leaner, more adaptable systems that serve both people and the planet.
- Anne Le Hénanff (France) highlighted that AI's growing energy demands may outpace green energy progress, setting the context for policy intervention.
- The Sustainable AI Coalition, co-founded by France, India, UNEP, and ITU, expanded from 90 to over 220 partners and now includes 15 countries (the Netherlands joined this year).
- In 2026, the coalition will launch 'AI research pitch sessions' to fund and connect university research with industry, targeting sustainable AI.
- The coalition, with ITU, IEEE, and ISO, released the second edition of global guidelines on AI environmental sustainability standardization.
- France is implementing policies for low-carbon, renewable-powered, and leaner AI hosted in green data centers.
- India, France, and UNESCO jointly launched the 'Resilient AI Challenge' to spur innovation in compressed and energy-efficient models; submissions will be ranked for both accuracy and energy use.
- Dr. Tawfik Jelassi (UNESCO) stated generative AI now serves 1+ billion users, with inference consuming hundreds of gigawatt-hours annually—on par with the electricity use of millions of people in low-income nations.
- A single large AI model can consume over 1,000 megawatt-hours to train, emphasizing the urgent need for efficiency.
- UNESCO research shows design optimizations (model compression, specialized architecture) can cut AI energy use by up to 90% without sacrificing performance.
- Panelists from Google, Mistral AI, Kenya, and India discussed real-world initiatives: Kenya's energy mix is 95% green, and Google’s Gemini and Gemma model families span the performance-efficiency frontier, with deployment of 23 optimized models on India's AI Kosh platform.
- Global policy frameworks now embed sustainable AI: the UN Digital Compact and UN Environment Assembly resolutions reference sustainability in AI.
- The Resilient AI Challenge will culminate with winners announced at the AI for Good Summit in Geneva in July.
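The compression gains cited above rest on techniques such as low-bit quantization, which stores weights in 8 bits instead of 32 at a bounded precision cost. A toy sketch of symmetric int8 quantization, illustrative only and not the specific optimizations UNESCO evaluated:

```python
def quantize_int8(weights):
    """Map float weights to int8 values with a single symmetric scale."""
    scale = max(abs(w) for w in weights) / 127 or 1.0  # avoid zero scale
    q = [round(w / scale) for w in weights]            # ints in [-127, 127]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the int8 representation."""
    return [v * scale for v in q]

w = [0.82, -1.27, 0.004, 0.56]
q, s = quantize_int8(w)
approx = dequantize(q, s)

# int8 storage is 4x smaller than float32, and the round-trip error
# stays within one quantization step per weight.
assert all(abs(a - b) <= s for a, b in zip(w, approx))
```

Production systems layer calibration, per-channel scales, pruning, and distillation on top of this idea, which is where the larger efficiency gains come from.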
Agents of Change: AI for Government Services & Climate Resilience
The session commenced with a keynote from Minister Babu of Telangana, who presented an ambitious vision for the future of AI-driven governance in India. Emphasizing the transition from traditional digital tools to advanced agentic AI systems, Minister Babu detailed Telangana’s pioneering initiatives, including AI-enabled disaster prediction, resilient digital infrastructure, and inclusive agricultural advisories. Notably, Telangana has established India's first sovereign AI nerve center, ICON, and a comprehensive open data exchange platform encompassing 84 datasets. These foundational efforts aim not only to improve government service delivery and disaster response but also to foster a new paradigm where AI is treated as vital public infrastructure and a co-governor alongside human officials. The panel discussion that followed reinforced the seismic industry shift from simple, prompt-based generative AI toward fully-acting, end-to-end AI agents that deliver real-world value, noting the importance of governance, trust, and strict guardrails, particularly within the public sector where stakes are high. Participants highlighted that the adoption of agentic AI in India is both an example for the global south and a harbinger of a fundamental change in public administration models.
- Telangana is the first state in India to launch a sovereign AI nerve center (ICON) as an innovation and R&D hub, positioning AI as public infrastructure.
- Creation of the Telangana Data Exchange Platform (open data pipeline) with 84 datasets, enabling anticipatory public services in areas such as healthcare and agriculture.
- AI agents are being deployed to predict disasters (like floods on the Musi river), deliver services proactively, and reduce the gap between environmental events and government response (e.g., faster insurance settlements for farmers).
- State-wide deployment of solar-powered edge computers ensures continuity of government services amidst power grid failures.
- The upcoming AI City and Bharat Future City aim to be net-zero, self-governing urban environments with embedded AI for sustainability and resource management by 2035.
- Adoption of agentic AI represents a shift from digital support tools (co-pilots) to fully-acting autonomous agents capable of end-to-end process execution and decision support for governance.
- Emphasis on guardrails, governance, trust layers, and auditability of AI agents, especially in high-stakes public sector applications like procurement.
- Panel consensus: The biggest shift in AI from 2025 to 2026 is moving from task-specific AI to system-level agentic AI that can reason, act, and deliver measurable business and governance value.
- Telangana’s AI implementation model is positioned as a precedent for the Global South and emerging economies.
AI as critical infrastructure for continuity in public services
The session at the India AI Impact Summit 2026 concentrated on the importance of secure, resilient, and trustworthy AI systems within national and international frameworks, using Poland’s experience as a case study. Speakers highlighted lessons from digital governance, emphasizing the interdependence of cybersecurity, AI, and digital cooperation—including the vital protection of critical infrastructure and the development of national language models. The International Telecommunication Union (ITU) underscored ongoing efforts in creating robust global AI standards for interoperability and trust, referencing more than 200 approved and 200 pipeline standards. Panelists outlined that multistakeholder participation and transparent, accountable processes are crucial for building public trust in AI governance, further noting the necessity of community-driven inclusion, especially considering linguistic and contextual diversity. The discussion also addressed the practicalities of regulatory alignment for international trade, citing the EU AI Act’s impact on Indian businesses and advocating for 'sandbox' solutions to ease compliance and market entry. From the industry perspective, expert insights emphasized the operational challenges of scaling AI across regions while remaining compliant with fast-evolving, diverse regulatory landscapes, advocating the adoption of AI compliance tools and the importance of continuous adaptation to multinational standards.
- Poland's digital strategy emphasizes cybersecurity, the support of local governments, and AI integration for protecting critical infrastructure (energy, water, healthcare).
- Poland developed national LLMs ('B' and a second model in collaboration with academia and private sector) to bolster both public and private sector competitiveness.
- The International Telecommunication Union (ITU) has over 200 approved and 200 in-pipeline AI standards, aiming for nearly 500 standards to promote global interoperability.
- AI standards focus on shared data formats, standardized APIs, harmonized terminology, and protocols to enhance cross-border system compatibility.
- Multistakeholder governance (governments, civil society, technical community, private sector) and transparent processes are key to public trust in AI.
- Community inclusion, linguistic diversity, and accessible feedback loops ensure local trust and broader AI adoption.
- The EU AI Act (2026) provides a regulatory 'playbook' for safe AI deployment; Indian companies ready for compliance gain easier access to EU markets.
- 'Sandbox' regulatory frameworks in the EU (e.g., France’s accelerator with 10 Indian AI firms in 2025) help businesses test compliance and scale internationally.
- Private sector leaders recommend adopting complex AI compliance tools to manage rapidly changing multinational standards and regulations.
Building the Workforce: AI for Viksit Bharat 2047
The opening session of the India AI Impact Summit 2026 set a visionary tone focused on ethical, collaborative, and capacity-driven adoption of AI for public good. Panelists and speakers highlighted the transformative potential of AI, likening its impact to electricity, and stressed India's unique third pathway for AI governance rooted in trust, public infrastructure, and capacity building. Key announcements included the launch of a blueprint for a Digital Capacity Building Alliance—aimed at upskilling public officials and forging international partnerships—and an emphasis on sector-specific small language models and localized AI solutions. Leaders stressed the importance of ethical frameworks, workforce transformation, re-engineering legacy systems, and international collaboration (notably with Brazil) to ensure safe, inclusive, and equitable AI deployment across India’s vast governance ecosystem.
- The summit is thematically aligned with 'AI for economic development, social good, safe and ethical AI, and human capital'.
- Explicit recognition of India's role as a global leader in AI-enabled governance, pursuing a third 'partnership-based' way between US market-led and China state-led models.
- Emphasized shift toward small, sector-specific language models and decentralized AI for local context solutions, especially for edge devices.
- Launch of a new blueprint for a 'Digital Capacity Building Alliance' to provide a shared training framework and digital public good for public officials.
- Institutional models like the Agra Community Portal have achieved large-scale impact in capacity building across India’s diverse governance ecosystem.
- Calls for re-engineering legacy government IT systems to leverage AI more holistically, including multilingual and edge-device deployment.
- Stressed the need for robust ethical, accountable, and inclusive governance frameworks, mirroring the Prime Minister’s recently outlined vision.
- Highlighted ongoing successes such as the Tata AI Saki programme empowering rural women with AI tools for livelihoods.
- Panel featured key stakeholders from government, digital infrastructure (Google Cloud), international governance, and public service innovation.
- Advocated for international collaboration, notably between India and Brazil, on trustworthy, aligned AI paradigms.
Scaling AI for Billions: Building Digital Public Infrastructure
This session at the India AI Impact Summit 2026 explored the intertwined evolution of AI and cybersecurity, highlighting how AI is both revolutionizing security capabilities and introducing unprecedented vulnerabilities. Panelists emphasized that as AI is increasingly embedded not just at the application layer but deep within digital and critical infrastructure, the traditional security paradigms are being strained. The discussion underscored a significant 'ambition versus reality' gap: while nearly 90% of enterprises seek to deploy AI-powered agents in 2026, only a fraction have robust data, compute, and governance capabilities in place. Challenges range from the technical (model poisoning, fragility of digital infrastructure, increased attack surfaces) to organizational (lack of preparedness, unclear boundaries between malfunction and security issues) and societal (deepfakes, erosion of trust, need for corporate AI responsibility). The panel collectively called for a foundational rethink, proposing the need for AI operating systems with built-in trust and governance, and an industry-wide shift toward 'corporate AI responsibility' to ensure robust, resilient, and trustworthy AI-enabled environments.
- AI and cybersecurity are converging, creating both new defenses and novel attack surfaces.
- Critical infrastructure and enterprise digital foundations are described as 'fragile,' with AI set to multiply existing vulnerabilities.
- Adoption rates are high—90% of surveyed Indian large enterprises aim to deploy AI agents this year, but only a third have adequate data governance and a quarter have sufficient compute capacity.
- Model poisoning, open-source vulnerabilities, and adversarial AI usage represent major new security challenges.
- 'Ambition vs. reality' gap: aspirations for AI deployment vastly outpace organizational and technical readiness.
- Strong call for 'AI operating systems' that combine context, agency, and—crucially—trust and governance layers.
- Enterprises largely unaware or underprepared; current approaches likened to building skyscrapers on bungalow foundations.
- Social engineering and deepfakes powered by AI make human users a critical weak link.
- Risk of undetected model drift, unclear distinctions between AI malfunction vs. genuine security breach.
- Panel proposes evolution from corporate social responsibility to 'corporate AI responsibility'.
Responsible AI in India: Leadership, Ethics & Global Impact
The session, 'Responsible AI: From Principles to Practice in Corporate India,' focused on the urgent transition from merely pledging responsible AI principles to implementing tangible, provable practices within Indian enterprises. Hosted by Adobe and FIG, with keynotes by Andy Parsons (Adobe's global head for content authenticity) and panel moderation by Shanti Mallaya (Economic Times), the discussion underscored that 2026 marks a pivotal year as India joins the global regulatory landscape alongside the EU and the US in mandating responsible AI, with new IT rules coming into effect. Andy Parsons outlined practical steps Adobe has taken, notably the creation and open-standard promotion of C2PA (Coalition for Content Provenance and Authenticity), an initiative ensuring robust content transparency as a foundational tool against AI-generated misinformation. He emphasized that responsible AI must become an operational discipline—baked into products and strategies, not appended onto them—to earn and maintain user trust at scale. The session also highlighted ongoing challenges in adoption, uneven consumer and platform engagement, and the importance of cross-industry, open, interoperable standards. The ensuing panel, featuring leaders from Air India, NPCI, RPG Group, and Adobe, aimed to provide diverse, industry-wide insights into institutionalizing responsible AI aligned with India’s vast digital ecosystem and policy ambitions.
- 2026 is a watershed year as responsible AI shifts from voluntary principles to enforceable compliance within global and Indian enterprises, spurred by regulations like the EU AI Act, California's first AI law, and India's updated IT rules.
- Adobe's C2PA (Coalition for Content Provenance and Authenticity) initiative is now a five-year-old, open-source global standard designed to embed content transparency and provenance across digital tools—a model for responsible AI at scale.
- C2PA is backed by major players (Adobe, Microsoft, BBC, OpenAI, Sony, Meta, Nikon, Qualcomm, etc.) and freely accessible to all, ensuring inclusivity for both independent creators and large enterprises in India.
- Business use of generative AI is accelerating across sectors (marketing, media, customer engagement), making traceability, transparency, and accountability operational needs, not just ethical considerations.
- Current challenges include inconsistent adoption, platforms stripping metadata (removing transparency), low consumer awareness of content credentials, and the lag in user interface adoption.
- Legislation is a catalyst for responsible AI but must be combined with industry cooperation, open standards, and interoperable infrastructures—India's track record with UPI demonstrates the power of such approaches.
- A cross-industry executive panel (Air India, NPCI, RPG Group, and Adobe) is convened to discuss translating responsible AI principles into pragmatic, enterprise-wide action aligning with India's leadership in digital innovation.
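C2PA's core mechanism is cryptographically binding a provenance manifest to the asset it describes, so that any modification invalidates the record. Real Content Credentials are signed CBOR manifests in a JUMBF container; the sketch below illustrates only the hash-binding idea, with simplified field names that are not the actual C2PA serialization.

```python
import hashlib

def make_manifest(asset_bytes, actions):
    """Toy provenance record: binds an edit history to the asset's hash.
    Real C2PA manifests are signed and embedded in the file itself."""
    return {
        "assertions": {"actions": actions},
        "asset_sha256": hashlib.sha256(asset_bytes).hexdigest(),
    }

def verify(asset_bytes, manifest):
    """Check that the asset still matches the hash recorded at signing time."""
    return hashlib.sha256(asset_bytes).hexdigest() == manifest["asset_sha256"]

img = b"\x89PNG...pixels"  # stand-in for real image bytes
m = make_manifest(img, [{"action": "c2pa.created", "softwareAgent": "ExampleApp"}])

assert verify(img, m)             # untouched asset validates
assert not verify(img + b"x", m)  # any modification breaks the binding
```

This is why stripping metadata, as noted in the adoption challenges above, defeats the transparency goal: the binding only helps if the manifest travels with the asset.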
How AI Drives Innovation and Economic Growth
The session at the India AI Impact Summit 2026 featured key perspectives from global development experts, including the World Bank and leading economists, on the promise and challenges of AI integration, especially for emerging markets like India. Speakers highlighted AI as a transformative technology with the potential to help developing countries leapfrog traditional development barriers and drive economic growth by enhancing productivity, improving service delivery in sectors such as agriculture, finance, and healthcare, and fostering local innovation through 'small AI'—practical, affordable, and context-specific applications. The discussion recognized significant hurdles including infrastructure gaps, foundational skills deficits, digital divides, and the risk of job displacement, particularly for lower-skilled, document-based roles. Panelists stressed the necessity of both strong public investments in infrastructure, policy, and regulatory frameworks, and a conducive business environment to enable private sector-led innovation and ensure AI's benefits are widely shared. India was cited as a leading example given its robust digital infrastructure and ongoing government efforts towards 'AI for all.' Ultimately, the panel called for pragmatic, policy-first approaches balancing optimism and caution, creative destruction and social safety, to ensure AI narrows—rather than widens—the development gap.
- World Bank and experts view AI as a structural transformation, particularly potent for emerging markets.
- World Bank study: 15-16% of jobs in South Asia are highly complementary to AI-enabled skill expansion.
- 'Small AI'—applications tailored to low-connectivity, low-resource contexts—is a strategic focus for development impact.
- AI applications already benefit diverse sectors: helping farmers detect crop issues, supporting nurses with diagnostics, and aiding financial institutions in credit assessment.
- Risks: Job losses anticipated, especially among entry-level, document-based roles; digital divide challenges around infrastructure, reliable electricity, internet, and foundational literacy/numeracy skills.
- India is highlighted as a global leader in digital ID, payment platforms, and small AI innovation, with targeted state-level and federal strategies towards inclusive AI deployment.
- World Bank supports advisory and backbone information services, rather than direct app development, assisting both governments and the private sector.
- Emphasis on 'AI for all'—ensuring disadvantaged groups have access and that programs function offline when required.
- Panelists warn that AI alone will not solve underlying issues in emerging economies; broader business and regulatory reforms remain necessary.
- Forthcoming World Development Report 2026 by the World Bank will focus specifically on AI and development.
- Call for balanced, pragmatic policy frameworks to harness AI's benefits while mitigating associated risks, particularly in closing (not widening) the development gap.
Safeguarding Children with Responsible AI
The session at the India AI Impact Summit 2026 focused on the vital issue of AI governance, especially regarding children’s interactions with AI systems. Baroness Joanna Shields emphasized the need for proactive regulation, drawing distinctions between the shortcomings of post-harm regulatory models used in social media and the unique, adaptive, and intimate nature of AI engagement with children. She stressed that AI should not be field-tested on children and called for age-appropriate safeguards by default. This keynote was followed by an engaging speech from Rahul John Aju, a young AI innovator, who highlighted the complexities young people face navigating AI, particularly concerning misinformation, privacy, and the lack of clear guidance for digital safety. Rahul illustrated the current demand for practical AI education through his free courses, which drew over 700,000 participants, and underscored the importance of teaching critical thinking, digital literacy, and the synergy of ‘natural’ and ‘artificial’ intelligence. The session culminated in the launch of a panel featuring representatives from UNICEF, OpenAI, and other industry leaders, who set out to explore how AI can foster children's agency and well-being, rather than inadvertently narrowing it or introducing unintended harms. The focus was on ensuring that AI systems expand children's capacity to learn, create, and think critically, while safeguarding their safety, identity, and emotional health.
- Baroness Joanna Shields called for preemptive, child-centric AI governance and distinguished AI’s adaptive model from prior social media platforms.
- AI's simulation of intimacy raises unique mental health, identity, and safety challenges for children, necessitating robust safeguards and age-appropriate design defaults.
- Children’s inability to distinguish between genuine human interaction and emotionally responsive AI was emphasized as a core risk.
- Rahul John Aju’s free AI educational courses reached over 700,000 participants, demonstrating significant grassroots demand for AI literacy.
- Rahul developed ‘Rescue AI,’ a tool that summarizes terms and conditions and alerts users to contract risks, aiming to empower young users against opaque digital practices.
- Consensus emerged that AI education must build critical thinking and responsible usage before handing students powerful AI tools, much as mathematics fundamentals are taught before calculators are allowed.
- A recurring theme was bridging the gap between adult-created safety norms and children’s lived digital experiences, calling for greater youth involvement in AI policy discussions.
- The session inaugurated a high-profile panel, including UNICEF’s Thomas Davin, OpenAI’s Chris Lehane, and others, to explore how AI can augment—rather than limit—children’s agency and creativity.
U.S. AI Standards: Shaping the Future of Trustworthy Artificial Intelligence
The panel at the India AI Impact Summit 2026 featured key representatives from leading US frontier AI companies (Anthropic, Google DeepMind, OpenAI, xAI) and senior US government officials, highlighting major industry and policy efforts to drive interoperability, open standards, and agent protocols in AI systems. The discussion centered on recent rapid investment and collaboration in developing universal technical standards and protocols—such as Anthropic’s Model Context Protocol (MCP), Google DeepMind's Agent2Agent (A2A) and Universal Commerce Protocol (UCP), OpenAI's commerce protocol, and xAI’s Macrohard agent project—designed to enable AI agents from different vendors to seamlessly communicate, access data, and perform tasks across platforms and organizations. These advancements allow builders globally, including in India, to leverage AI through improved interoperability, data portability, and shared best practices. The session also announced the Agent Standards Initiative led by NIST and the US Center for AI Standards and Innovation, which marks a strategic progression from AI safety to the broader adoption of standards and innovation. Collectively, these developments aim to democratize AI benefits, foster global cooperation, and empower industries and governments worldwide to build advanced applications upon open, secure, and widely adopted AI protocols.
- US AI companies are investing $700 billion in AI infrastructure in 2026, underlining the high stakes and rapid growth in the sector.
- Anthropic’s Model Context Protocol (MCP) is becoming an industry-wide open standard for connecting AI systems to diverse tools and data sources.
- Other major agent protocols highlighted include Google DeepMind’s Agent2Agent (A2A) protocol, the Universal Commerce Protocol (UCP), OpenAI's commerce protocol, and xAI’s Macrohard agent project.
- Protocols like MCP and 'skills' (task-specific instruction sets) emphasize open standards, interoperability, and data portability, preventing vendor lock-in and enabling easier switching between AI providers.
- Google DeepMind is working with global companies (such as Walmart, Target, Flipkart, Infosys) to implement these standards for transformative business and commerce use cases.
- The discussion emphasized the importance of technical and testing standards to ensure AI systems are reliable, secure, and interoperable across vendors and jurisdictions.
- OpenAI and industry partners underscored the collaborative nature of these standards efforts, comparing them to universal conventions like traffic signals to illustrate shared global infrastructure.
- xAI stressed how open, industry-agreed protocols enable broader innovation, not just among leading firms but for the entire ecosystem, facilitating public debate and regulatory evolution.
- The Agent Standards Initiative, announced during the session and led by NIST and the US Center for AI Standards and Innovation, marks a significant policy shift focusing on scalable standards and innovation rather than solely safety.
- India’s Digital Public Infrastructure (DPI) was recognized as a key data resource and reference point in global AI development.
- The panel highlighted a broader movement toward open, competitive environments for AI, supporting cross-sector and international collaboration.
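To make the interoperability idea concrete: MCP is built on JSON-RPC 2.0, and a client discovers and invokes a server's tools via `tools/list` and `tools/call` requests. The sketch below builds those request shapes in Python; the tool name `get_weather` and its arguments are hypothetical, and a real client would also perform an initialization handshake first.

```python
import json

def mcp_request(request_id, method, params=None):
    """Build a JSON-RPC 2.0 request of the kind MCP uses on the wire."""
    msg = {"jsonrpc": "2.0", "id": request_id, "method": method}
    if params is not None:
        msg["params"] = params
    return msg

# An MCP client typically lists a server's tools, then invokes one by name.
list_tools = mcp_request(1, "tools/list")
call_tool = mcp_request(2, "tools/call", {
    "name": "get_weather",           # hypothetical tool name
    "arguments": {"city": "Delhi"},  # hypothetical arguments
})

print(json.dumps(call_tool, indent=2))
```

Because the wire format is plain JSON-RPC, any vendor's agent can issue these requests against any compliant server, which is precisely the vendor-neutrality the panel emphasized.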
Driving Social Good with AI: Evaluation and Open Source at Scale
This session at the India AI Impact Summit 2026 brought together leaders from open source, academia, and AI evaluation organizations to discuss the evolving landscape of AI model evaluation, particularly in the context of multilingual and multicultural environments. The panel emphasized the importance of open-source software and community-driven initiatives for ensuring safety, robustness, and contextual relevance in AI. They highlighted the concept of AI 'red teaming' and the rise of 'contextual evaluations' that go beyond conventional benchmarks to stress-test models using domain expertise. Multiple speakers underscored the historical shift in India from being consumers to major contributors in the global open-source ecosystem, referencing projects like the Indic LM Arena. The launch of open-source AI red teaming software by Humane Intelligence (backed by Google.org) was announced, intended to support broader accessibility and standards in AI evaluation. The need for standardized evaluation artifacts—such as 'eval cards'—was discussed as a way to promote replicability and transparency. Overall, the session reflected a call for collaborative approaches, open resource sharing, and humility regarding the challenges in truly contextual AI model assessment.
- Open-source evaluation frameworks are being developed to enable broad access and collaboration, with Humane Intelligence announcing the upcoming release of their AI red teaming software under an open-source license (supported by Google.org).
- Red teaming methodologies, borrowed from cybersecurity, are being adapted to probe AI models for vulnerabilities across domains such as health, food security, and education.
- Panelists stressed that safety evaluations must be context-aware—highlighting varying needs between regions like India, Sub-Saharan Africa, and others.
- India has transitioned from passive open-source consumers to active global contributors, with universities like IITs and initiatives such as the Indic LM Arena playing leading roles.
- Community participation is seen as vital for sustaining open-source AI evaluation projects, lowering technical barriers, and pooling expertise and resources.
- Calls were made for standardized evaluation artifacts (e.g., interoperable 'eval cards' or model cards) to ensure apples-to-apples comparisons between models and contexts.
- Challenges remain in achieving comprehensive contextual coverage in model evaluation, requiring ongoing humility and iteration.
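As an illustration of what an interoperable 'eval card' might capture, here is a minimal, hypothetical schema; the field names and the sample card are our own assumptions, not a published standard:

```python
from dataclasses import dataclass, field, asdict

@dataclass
class EvalCard:
    """Illustrative 'eval card': the metadata needed to rerun an evaluation."""
    eval_name: str
    version: str
    target_domain: str                      # e.g. "health", "food security"
    languages: list = field(default_factory=list)
    dataset_uri: str = ""                   # where the test items live
    metric: str = ""                        # e.g. "unsafe_response_rate"
    protocol_notes: str = ""                # sampling settings, judges, red-team setup

card = EvalCard(
    eval_name="indic-safety-probe",         # hypothetical evaluation
    version="0.1",
    target_domain="health",
    languages=["hi", "ta", "bn"],
    metric="unsafe_response_rate",
)

# Serializing to a plain dict is what makes cards portable across harnesses.
record = asdict(card)
print(record["eval_name"], record["languages"])
```

A shared schema like this is what enables the apples-to-apples comparisons the panel called for: two labs can verify they ran the same items, in the same languages, scored by the same metric.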
Collaborative AI Network: Strengthening Skills, Research & Innovation
The session at the India AI Impact Summit 2026 centered on the concept of AI diffusion, highlighting both the challenges and opportunities in democratizing foundational AI resources like data, compute, talent, and models. Mr. Saurabh Garg, Secretary of MoSPI (India's Ministry of Statistics and Programme Implementation), emphasized the necessity for AI-ready datasets, focusing on qualities such as discoverability, trustworthiness, interoperability, and usability. He introduced 'METRI', a proposed multi-stakeholder platform aimed at collectively managing foundational AI resources in a modular, voluntary manner, with the ultimate goal of supporting India’s journey towards Digital Public Infrastructure (DPI) for AI. The panel discussion underscored global collaboration, particularly involving partners from Kenya, Italy, and Brazil, stressing the importance of context-relevant AI solutions, breaking down data/resource silos, and building institutional trust to move from pilot projects to scalable, impactful AI adoption. Examples from Africa and Brazil highlighted shared infrastructures and policy frameworks fostering inclusivity and trust for AI-driven transformation, especially in addressing local needs and maximizing economic and social impacts in the Global South.
- AI diffusion is framed as a crucial step from invention (largely in the West) to widespread impact and adoption, particularly in the Global South.
- Four foundational AI resources—compute, data sets, talent, and models—are identified as prerequisites for impactful AI diffusion.
- India is working towards making datasets 'AI-ready', focusing on discoverability, trustworthiness, interoperability, and usability, alongside robust metadata standards.
- A new multi-stakeholder platform 'METRI' (Multi-stakeholder AI for a Resilient and Trusted Infrastructure) was proposed to democratize AI resources on a modular, voluntary, and non-binding basis.
- Panelists highlighted the ‘100 AI Diffusion Pathways by 2030’ initiative to drive AI implementation across various sectors and countries, starting with tripartite collaboration among Kenya, Italy, and India.
- The G7 AI Hub was mentioned as a global initiative to unlock foundational AI resources for Africa, addressing access to data, compute, and the business case for data centers/GPUs.
- Examples from Brazil illustrated data-driven government AI initiatives emphasizing personalization, interoperability, and joint platforms across ministries to overcome data silos.
- The importance of governments being involved from the outset in AI project design to ensure scalability, trust, and effectiveness—moving from isolated pilots to population-scale impacts.
- Institutional trust and inclusivity are viewed as central to both user adoption of AI outputs and the legitimacy of AI-driven public services.
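The four 'AI-ready' qualities can be made operational as metadata checks on a dataset record. The sketch below is illustrative only: the required fields and the sample record are assumptions, not an official MoSPI or METRI specification.

```python
# Map each AI-readiness quality to the metadata fields that evidence it
# (field choices are illustrative, not an official standard).
REQUIRED_FIELDS = {
    "discoverability": ["title", "keywords"],
    "trustworthiness": ["publisher", "license", "last_updated"],
    "interoperability": ["schema_uri", "format"],
    "usability": ["description", "access_url"],
}

def ai_readiness_report(metadata: dict) -> dict:
    """Return, for each quality, the metadata fields still missing or empty."""
    return {
        quality: [f for f in fields if not metadata.get(f)]
        for quality, fields in REQUIRED_FIELDS.items()
    }

# Hypothetical dataset record with a few gaps.
sample = {
    "title": "District Crop Yields",
    "keywords": ["agriculture", "yields"],
    "publisher": "MoSPI",
    "license": "CC-BY-4.0",
    "format": "CSV",
    "description": "Annual yields by district",
}
report = ai_readiness_report(sample)
# The report shows this record still lacks 'last_updated', 'schema_uri',
# and 'access_url' before it can be considered AI-ready.
```

Automating checks like this at catalog-ingestion time is one concrete way a platform such as METRI could enforce readiness before datasets are shared.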
Transforming Health Systems with AI: From Lab to Last Mile
The session at the India AI Impact Summit 2026 focused on showcasing AACare's AI-powered, end-to-end healthcare solution aimed at alleviating the fragmentation and inefficiencies in patient-doctor interactions and medical record management. The solution seamlessly integrates India's digital health infrastructure (ABHA), AI-enabled patient history summarization, multilingual support, and intelligent appointment scheduling, enhancing the patient journey from symptom reporting to post-consultation follow-up. During the panel, leading global health experts and regulators discussed the broader implications of AI deployment in healthcare, emphasizing the importance of human-centered design, the challenges of scalability and linguistic diversity, and the need to maintain a people-first ecosystem even as technology advances. Concerns about meaningful AI integration—ensuring humans remain central and risks are recognized alongside opportunities—were highlighted, marking a shift from hype toward more nuance and responsibility in AI's role within healthcare.
- AACare introduced a comprehensive, AI-powered healthcare platform that solves key challenges in information fragmentation, ease of medical history sharing, and minimizing doctors' administrative burden.
- The platform leverages India's ABHA digital health ID system, integrating patient health records, image-based data extraction, and natural language AI assistance in local languages.
- Patient experience innovations include AI-driven, multilingual conversational bots for symptom reporting and intelligent, prompt-based appointment scheduling.
- Doctor-side AI features transcribe and summarize consultations, automatically populate EMRs, and flag medication errors (e.g., detecting allergies), thereby reducing errors and saving time.
- All consultation data and advice are translated into the patient's preferred language and saved into their digital health record for future reference.
- Key technical challenges discussed include large-scale, multilingual data generation, verifiable model evaluation, and the issue of scaling to diverse populations.
- Panelists stressed the need for human-centered AI, highlighting the importance of broader ecosystem and social context over mere technological innovation.
- Global regulatory perspectives were shared regarding harmonization, risk-awareness, and the value of real-world, inclusive dialogues as the sector evolves beyond hype.
- Personal anecdotes and reflections underlined the tangible relevance and empathy driving this AI healthcare transformation.
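A medication-allergy flag of the kind described above can be sketched as a simple cross-check. In practice such checks draw on curated clinical knowledge bases; the drug names and the cross-reaction table here are purely illustrative.

```python
def flag_allergy_conflicts(prescribed, patient_allergies, cross_reactions):
    """Return prescribed drugs that match, or cross-react with, known allergies."""
    allergies = {a.lower() for a in patient_allergies}
    flags = []
    for drug in prescribed:
        d = drug.lower()
        if d in allergies:
            flags.append((drug, "direct allergy"))
        else:
            for allergen in allergies:
                if d in cross_reactions.get(allergen, set()):
                    flags.append((drug, f"cross-reaction with {allergen}"))
                    break
    return flags

# Hypothetical table: a penicillin allergy may contraindicate related drugs.
CROSS = {"penicillin": {"amoxicillin", "ampicillin"}}
alerts = flag_allergy_conflicts(["Amoxicillin", "Paracetamol"], ["Penicillin"], CROSS)
# alerts -> [("Amoxicillin", "cross-reaction with penicillin")]
```

The value of wiring this into the EMR population step is that the flag fires at prescription time, before the consultation record is finalized.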
AI for Good: Technology That Empowers People
The session opened with Fred Werner from the International Telecommunication Union (ITU), who traced AI's rapid evolution from conceptual hype to impactful real-world applications and spotlighted the ITU's 'AI for Good' initiative, which involves 50 UN sister agencies. He highlighted the shift towards 'zero-click' AI agents, embodied AI, and brain-computer interfaces, stressing the importance of international collaboration for AI governance, standards, and equitable progress. India's specific contributions were showcased by Professor Vijay from IIT Delhi, who focused on edge AI's critical role in ensuring low-latency, high-reliability solutions—especially vital for the Global South—through haptics research, split control architectures, and standards development. Vijay cited ongoing joint efforts between Indian bodies like TSDSI and ITU around dynamic AI models, security, tactile applications, and preparing for 6G networks, while also advocating for the inclusion and upskilling of diverse communities. The session broadly underscored how AI and edge computing are being positioned as enablers of inclusive technological advancement in emerging economies, with a firm commitment to ethical use, standards, and context-sensitive innovation.
- ITU's 'AI for Good' is a global initiative involving 50+ UN sister agencies to promote AI for social benefit.
- The program is organized around three pillars: solutions (e.g., machine learning challenges and startup competitions), skills (including an AI skills coalition and machine learning sandboxes for government upskilling), and standards (with over 400 AI standards published or in development).
- Generative AI, AI agents, and the rise of 'zero-click' automation (AI acting proactively) are current trends addressed.
- Embodied and physical AI (including robotics and brain-computer interfaces) and emerging areas like Space AI were highlighted as next frontiers.
- IIT Delhi's research on edge AI centers around real-time, reliable haptic feedback and intent-based signal processing, enabling critical use cases in the Global South where context-specific solutions are vital.
- Edge AI's importance is magnified for low-latency, high-precision applications, which are crucial for sectors such as healthcare and disaster response.
- Indian initiatives via TSDSI include standards and technical reports on dynamic AI/ML models for V2X (vehicular-to-everything), security, digital twinning, architecture for tactile applications, and preparations for 6G/AI-native networks.
- Global standards forums—ITU IMT 2030, 3GPP, oneM2M, among others—are actively shaping the edge AI landscape, with India playing an increasingly prominent role.
- There is a strong push for inclusivity through regular flagship conferences, educational integration, and technical collaboration to ensure the benefits of AI are shared broadly across developing regions.
Trusted Connections: Ethical AI in Telecom & 6G Networks
The inaugural session on 'Responsible AI in Telecom' at the India AI Impact Summit 2026, co-hosted by the Telecom Regulatory Authority of India (TRAI) and India AI, underscored the foundational and transformative role of AI in Indian telecom. Marking TRAI’s 29th anniversary, the session featured the Honorable Chairman Anil Kumar Lahoti, who highlighted India’s leadership in AI-driven, large-scale telecom networks and regulatory innovations. Key themes covered included the shift to AI-native 6G networks, regulatory frameworks balancing innovation with safeguards, massive operational impacts (such as filtering billions of spam calls), and the need for ethical, accountable AI adoption. The session set the stage for two plenary panels: the first focusing on network evolution towards autonomy and sustainability, and the second on trust, governance, and consumer protection. Industry leaders from global technology firms (Ericsson, Qualcomm, Nokia, Tejas Networks) echoed India’s readiness to leverage near-complete 5G coverage, scale up to 6G, and export AI-driven telecom solutions. Discussions addressed AI’s role in network management, customer experiences, fraud prevention, sustainability, security, and the vital need for international standardization and cooperation. The event positioned India as a frontrunner in building the world’s most robust, ethical, and AI-enabled telecom infrastructure.
- TRAI and India AI (MeitY) co-hosted the session to address responsible AI in telecom, coinciding with TRAI’s 29th anniversary.
- AI is now considered intrinsic to the design and operation of next-gen (6G) networks, becoming a foundational capability rather than an add-on.
- India has over 1.3 billion telecom subscribers and 1 billion data users, meaning AI at population-scale is essential, not optional.
- AI and blockchain-based solutions are already blocking nearly 400 million spam calls/messages daily, and over 2.1 million spam numbers have been disconnected.
- TRAI is piloting a digital consent framework to grant users control over their communications-related consents.
- Regulatory stance includes a risk-based AI framework (July 2023), a regulatory sandbox for AI solutions (April 2024), and a focus on human-centric, accountable, and safe AI.
- AI is improving energy efficiency, network performance, and consumer safety in real-world deployments.
- India’s near-universal 5G coverage (up to 99%) provides a strong foundation for 6G and AI-driven telecom innovation.
- The network-side session zeroed in on access network evolution, core network implications, handset AI challenges, security concerns, and sustainability imperatives.
- Panelists from Ericsson, Qualcomm, Nokia, and Tejas Networks emphasized India’s digital infrastructure leadership and the country’s role in setting global standards for AI-native networks.
- The MANOV vision, recently announced by the Indian PM, anchors ethical and inclusive AI governance as central to telecom’s future.
- Sessions will debate how to ensure trust, transparency, and accountability in automated, AI-powered network decisions affecting millions.
Harnessing Collective AI for India’s Social and Economic Development
The panel at the India AI Impact Summit 2026 explored the evolving relationship between artificial intelligence and societal systems, using the metaphor of Avengers superheroes to illustrate the diversity of expertise and approaches on the panel. Key themes included the distinction between intelligence and coordination in AI-enabled societies, and the need for AI systems to facilitate collective welfare and not merely individual optimization. The panelists discussed the growing prevalence of agent-based AI systems which interact with each other on behalf of users, highlighting both the opportunities for coordination and the risks of unintended negative consequences, such as contestation for resources or cascading system effects. The conversation also emphasized the powerful influence of recommendation algorithms on individual choices and societal attitudes, arguing that these systems do not simply reflect preferences but actively shape them, often in ways aligned with designers’ goals rather than users’ interests. Additionally, civic engagement was addressed, particularly how AI is leveraged to amplify citizen voices in governance and streamline government processes while underscoring the need for transparent, equitable frameworks to guide this technology. The session collectively called for interdisciplinary partnerships, regulatory foresight, and ethical design to ensure AI serves the collective good rather than amplifying inequalities or entrenching elite power.
- The panelists represented a spectrum of AI expertise, each likened to an Avenger character to highlight their unique perspectives on AI and society.
- Professor Seth emphasized that societal failures are often due to coordination problems, not just intelligence deficits; future AI should focus on collective coordination.
- The next phase of AI will involve 'agentic' systems—populations of AI agents working interactively, requiring frameworks to ensure their actions benefit the broader community.
- Unchecked proliferation of agentic AI could result in resource contests and negative systemic cascades, necessitating embedded social responsibility in AI design.
- Professor Nirav highlighted that multi-agent AI systems are best suited to solving socio-technical problems where local and global optima differ, such as ride-sharing, epidemics, and resource allocation.
- Recommendation algorithms do more than reflect user preferences; they actively nudge and shape users’ beliefs and decisions, raising concerns about autonomy and societal impact.
- Professor Manjunath cited research and industry experiences (e.g., Facebook) showing that recommendation systems can radically influence population behaviors and preferences.
- Andra Vasudevan spoke on using AI to amplify citizen voices and increase transparency in government, stressing the need for robust, equitable adoption frameworks, especially in diverse countries like India.
- The session moved the focus from AI as a collection of smarter apps for individuals to AI as systemic infrastructure influencing collective life, work, and societal participation.
- A recurring theme was the need for interdisciplinary collaboration (researchers, companies, NGOs, governments) in designing and promoting AI systems for equitable outcomes.
Driving Enterprise Impact Through Scalable AI Adoption
The session at the India AI Impact Summit 2026 focused on the imperative of broad, economy-wide artificial intelligence adoption, highlighting the launch of the BSA’s Global Enterprise AI Adoption Agenda. US and Indian leaders underscored the importance of integrating AI into government and enterprise, emphasizing policy shifts focused on workforce development, infrastructure (especially data as infrastructure), and practical governance. The US presented details of President Trump’s 2025 America’s AI Action Plan, which includes limiting procurement of ideologically biased AI models, rapid AI infrastructure expansion, and promoting AI technology export to trusted partners, especially India. Industry leaders from companies such as Autodesk, SAP, and Workday illustrated successful enterprise AI adoption stories and outlined the critical role of data contextualization, culturally-aligned values, and workforce skills modernization in scaling AI’s impact—framing India as a global innovation hub uniquely positioned for co-development and co-investment with the US and global partners.
- Launch of BSA’s Global Enterprise AI Adoption Agenda, built on three pillars: workforce, infrastructure/data, and practical governance.
- Emphasis on actionable recommendations for government to accelerate enterprise and government AI adoption.
- President Trump’s America’s AI Action Plan (July 2025): directs U.S. federal procurement away from ideologically biased AI, accelerates construction of AI-supporting data centers and power infrastructure, and promotes American AI exports to trusted allies.
- New U.S. AI policy emphasizes co-development, co-investment, and co-creation with trusted partners like India, including building resilient semiconductor supply chains.
- India is positioned as a key innovation partner due to its talent, digital public infrastructure, scale, and shared democratic values; major U.S. firms' largest international operations now reside in India.
- Autodesk’s AI solution proved capable of optimizing sustainable construction by simulating thousands of scenarios for affordable housing, achieving near carbon-neutrality within an ambitious timeline.
- Enterprise AI scaling requires continuous, context-rich data capture across all stages of industrial processes.
- SAP emphasized the importance of building AI systems that reflect diverse, region-specific cultural values, underpinned by a globally distributed, innovative workforce.
- Workday highlighted the urgency for India to drive strategic skills modernization and data upgradation, promoting a skills-first, agile workforce as fundamental for national AI ambitions.
AI in Mobility: Accelerating the Next Era of Intelligent Transport
This session at the India AI Impact Summit 2026 brought together key leaders from IT India, IIT Hyderabad, IIT Bombay, the OMI Foundation, government agencies, and major industry players to discuss and frame recommendations for the future of AI in mobility. The dialogue highlighted India's unique mobility challenges, such as high accident rates and congestion, and set the stage for the transition from 'Mobility 4.0' (connected, electric, shared vehicles) to 'Mobility 5.0'—characterized by truly intelligent vehicles, infrastructure, and ecosystems powered by AI. Concrete examples included AI-enabled road safety enforcement, adaptive traffic management, real-time multimodal integration, smart incident detection, and logistics optimization. The automotive industry leaders detailed how AI will fundamentally transform the entire life cycle of a vehicle, from design conception through manufacturing, operations, maintenance, customer experience, and ultimately recycling, in line with new regulatory requirements. The session called for policy action and industry-academic collaboration to embed AI technologies at every link in the chain, with the ultimate goal of safer, greener, and radically more efficient mobility across India.
- India is transitioning from 'Mobility 4.0' (connected, shared, electric vehicles) to 'Mobility 5.0,' where AI is deeply embedded for intelligent vehicles and infrastructure.
- Although India accounts for only 1.5–2% of the world's vehicles, it contributes 12% of global road accident fatalities, underscoring the urgency of the safety challenge.
- AI solutions discussed include: automated compliance (vehicles responding to road signs directly), AI-coordinated adaptive traffic signals, real-time multimodal integration, and smart emergency response for accidents.
- AI can dramatically shorten vehicle design and development cycles, enabling digital prototyping and predictive supply chain management.
- Automotive industry is leveraging AI for quality assurance, predictive maintenance, OTA software updates, improved customer experience, and enhancing vehicle longevity.
- End-of-life vehicle management and recycling will be transformed by AI, facilitating compliance with India's new EPR (Extended Producer Responsibility) rules across plastics, rubber, e-waste, batteries, and more.
- Cross-sector stakeholders—academia, industry, government—are contributing recommendations to shape the AI Impact Summit's proceedings and policy actions.
- Panel included top voices from IT India, IITs, OMI Foundation, Society of Indian Automobile Manufacturers, Qualcomm, and emerging highway technology firms.
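As a toy illustration of the adaptive traffic-signal idea mentioned above, the sketch below splits one signal cycle among approaches in proportion to observed queue lengths. This is a deliberate simplification; deployed AI systems use far richer optimization and prediction, and all numbers here are invented.

```python
def green_splits(queue_lengths, cycle_s=120, min_green_s=10):
    """Split one signal cycle among approaches in proportion to queue length,
    guaranteeing every approach a minimum green time."""
    n = len(queue_lengths)
    spare = cycle_s - n * min_green_s          # time left after minimum greens
    total = sum(queue_lengths)
    if total == 0:
        return [cycle_s / n] * n               # no demand: split evenly
    return [min_green_s + spare * q / total for q in queue_lengths]

# Four approaches; the heaviest queue gets the longest green phase.
splits = green_splits([30, 10, 5, 5], cycle_s=120, min_green_s=10)
# splits -> [58.0, 26.0, 18.0, 18.0], summing to the 120 s cycle
```

Even this naive rule shows the shift from fixed-timer signals to demand-responsive control; an AI-coordinated system would additionally forecast queues and coordinate adjacent junctions.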
Catalyzing Global Investment in AI for Health: WHO Strategic Roundtable
The panel at the India AI Impact Summit 2026 focused on the critical transition from AI's theoretical promises in healthcare to its real-world implementation, emphasizing the necessity of building systems that are safe, trusted, and equitable. The discussion covered the challenges and opportunities of integrating AI into clinical workflows, the imperative to establish 'verified AI' with transparent, auditable decision-making to mitigate risks, and the global push for diversity in data and partnership-driven investment, especially across the Global South. Highlighting pioneering initiatives such as AI-powered medical note-taking, remote telesurgery, and autonomous robotics, the speakers stressed that investment must extend beyond technology to encompass governance, workforce training, and regulatory frameworks. Panelists advocated for 'humans in the loop,' called for AI curriculum integration in health education, and underscored partnerships between countries, donors, and civil society. The session concluded with unified calls to ensure that AI's impact in health is measured by actual outcomes, equity, and sustained trust, rather than by technological novelty alone.
- Shift from pilot projects and theoretical discussions to focus on AI implementation, system integration, and measurable healthcare impact.
- Announcement of the 'verified AI' initiative to ensure AI models in healthcare have transparent and auditable decision-making processes, aiming for zero tolerance on critical errors.
- UK-funded 'Responsible AI' program expanding through partnerships in India and Africa, placing AI champions in hospitals to foster adoption and evaluation.
- Ambient AI for medical note-taking funded and evaluated, yielding significant reductions in wasted operating room time.
- Recent publication in the British Medical Journal detailing telesurgery 2.0, enabling operations from 2,500 km away with just 60 ms latency—targeting the 5 billion underserved patients globally.
- Kings College investment in medical robotics automation, with latest advances achieving 100% procedural accuracy in animal trials but highlighting the need for societal acceptance and human oversight.
- Emphasized the essential role of diverse, inclusive data to ensure AI relevance and efficacy across different populations.
- Call to integrate AI education and training into medical and nursing school curricula globally.
- Panelists advocated for strong partnerships between countries, funders, and donors aligned with national priorities, and emphasized the need for enabling policy, strategies, and investment at the country level.
- Consensus that trust—enabled by regulatory frameworks, transparency, and evidence—is the key leverage point for scaling sustainable AI investment in health.
Who Watches the Watchers? Building Trust in AI Governance
The session at the India AI Impact Summit 2026 brought together leading figures in AI governance and safety to discuss the current global landscape and challenges. Stephen Clare, co-lead author of the International AI Safety Report, emphasized the report’s role as the foundational knowledge base for AI policy conversations worldwide, highlighting rapid advancements in both the deployment of AI and the technical progress in risk management—such as improved safeguards against ‘jailbreaking’ harmful model outputs. Despite these improvements, significant challenges persist: vulnerabilities and inconsistencies remain both in technical protection and in the implementation of organizational safety frameworks across companies. Hiroki Habuka offered a comparative overview of national strategies, making clear that all major economies blend hard (legislative) and soft (guideline-based) regulatory approaches, but differ on sectoral versus holistic regulation and how ex-ante or ex-post accountability is enforced. He emphasized the absence of clear societal benchmarks for complex AI ethics values like privacy or fairness. Shaina Mansbach, representing the think tank Fathom, identified a growing 'trust problem' extending from the public to regulators and developers, exacerbated by the pace of AI evolution and the concentration of technical expertise in frontier labs. She called for new governance models that can confer 'earned trust' and adapt to the unique speed and complexity of contemporary AI technologies.
- The International AI Safety Report, backed by over 30 countries and hundreds of experts, is now considered the minimum foundation for global AI governance discussions.
- Worldwide AI usage has reached a billion users, with tangible impacts on productivity, labor, and security—including increased deployment of deepfakes and AI-enabled cyberattacks.
- Technical progress: Safeguards against dangerous AI behavior (e.g., 'jailbreaking') have significantly improved—what used to take minutes to circumvent now takes 7–10 hours on latest models.
- Twelve leading AI companies have adopted frontier safety frameworks, marking an increase in transparency and industry self-regulation.
- However, technical and organizational safeguards remain inconsistently applied, and vulnerabilities persist, particularly outside the leading 'frontier' developers.
- Global regulatory approaches differ: the EU favors holistic, hard-law regulation (e.g., EU AI Act); Japan and the US prefer sector-specific, soft-law or principle-based frameworks with distinct approaches to accountability.
- A central unsolved challenge is creating clear benchmarks for measuring AI system values such as privacy, transparency, and fairness.
- The ‘trust problem’ is acute across all stakeholder groups—public, deployers, regulators, and developers—due to uncertainty about safety, security, and compliance of AI systems.
- Traditional, command-and-control regulatory approaches struggle to keep pace with AI’s rapid evolution and often lack technical enforcement capacity.
- There is an urgent call for adaptive, multi-stakeholder, trust-building governance models capable of responding flexibly to emerging risks and uncertainties.
AI Collaboration Across Borders: India–Israel Innovation Roundtable
The opening session of the India AI Impact Summit 2026 underscored a rapidly deepening partnership between India and Israel in artificial intelligence, with a particular focus on leveraging joint strengths in scientific research, education, and social innovation. Key leaders from both nations emphasized the maturity of their bilateral relationship, noting past collaborations in areas like water, defense, and agriculture, and expressing a shared eagerness to extend this legacy into AI. The Indian state of Telangana highlighted its leadership in AI commercialization and policy, having launched the pioneering ICOM AI Hub and tailored funding for AI startups. Israeli representatives detailed the integration of AI into scientific research cycles and government decision-making, looking to India’s scale and skilled workforce to drive mutual advancement. Social innovation was likewise prominent, with Indian organizations like Action for India launching specialized AI cohorts and fostering exchanges with Israeli deep tech entrepreneurs. The dialogue consistently celebrated India’s role as a test bed for scalable solutions and frugal innovation, advocating for knowledge-sharing and co-development in areas such as personalized education and AI-enabled societal transformation.
- India and Israel stressed their longstanding partnership and mutual respect, noting seven to eight decades of collaboration across sectors.
- Israel aspires to be among the world's top three AI leaders, recognizing India as a key strategic ally for achieving scale and impact.
- Telangana state introduced ICOM, the first state-backed AI hub in India, and a dedicated fund of funds for AI and IT startups.
- Israel highlighted its progress integrating AI into government and scientific processes, and called for joint India–Israel research grants and mutual support for AI-driven science.
- Action for India (AFI) launched an 'AI impact cohort', selecting a dozen top social entrepreneurs in climate, agriculture, and healthcare from over 100 applicants.
- AFI’s approach focuses on 'true AI startups' defined by proprietary data, deep sector expertise, and advancing solutions uniquely enabled by modern AI technologies.
- Concrete Indo-Israel collaboration examples include partnerships between Israeli deep tech startups and Indian incubators (e.g., T-Hub), and pilot programs in critical sectors.
- India's large problems and contextual diversity make it an ideal test bed for frugal, scalable social innovation, with solutions adaptable globally.
- Indo-Israel education partnerships are centering around personalized, AI-enabled learning systems, with both sides seeking knowledge exchange and shared development.
- Government and ministry leaders from both countries openly called for expanding collaborations in research, education, talent development, and innovation ecosystem linkages.
How Open Networks Are Transforming the Global South
The opening session at the India AI Impact Summit 2026 centered on the transformative potential of open networks, leveraging AI as a key enabler for democratizing access and fostering inclusion, especially across the Global South. Speakers highlighted India's success with digital public infrastructure—like UPI and Aadhaar—as a blueprint for scalable, interoperable solutions that prioritize diversity, resilience, and equitable value distribution. The session underscored the importance of trust, governance, and collaboration across government, industry, startups, and civil society, alongside the necessity of embedding AI responsibly to empower all participants rather than reinforce silos or monopolies. Early use cases—ranging from energy transactions managed via simple voice notes to outreach efforts for rural women entrepreneurs—demonstrated real-world AI diffusion and impact. The panel set the agenda for the summit, framing open networks as critical economic infrastructure that must balance innovation, competition, and inclusivity as their adoption spreads across emerging markets.
- Open networks, backed by AI, are pivotal for democratizing access and agency in the digital economy, especially for marginalized and rural communities.
- India's Digital Public Infrastructure (DPI) successes such as UPI and Aadhaar have demonstrated the power of unbundling and interoperability to reduce barriers and foster competition.
- The ONDC (Open Network for Digital Commerce) initiative was cited as an example of re-architecting markets to enable innovation at the 'edges' and support small businesses.
- Trust and governance—neutral, transparent, and inclusive—are as critical as technology itself for sustainable ecosystem transformation.
- AI must be diffused responsibly: not as a centralized gatekeeper but as an enabler for all market actors, supporting both scale and inclusion.
- Recent launches highlighted included Bharat Vistar and Mahavistar (agri networks) and the Indonesia Open Network for e-commerce, signaling pan-Global South momentum.
- Pilot projects, such as with Mann Deshi Bank’s rural women entrepreneurs, evidenced how open networks and AI can unlock new markets and direct growth.
- The future trajectory for open networks will be judged by diversity and depth of participation, resilience in governance, and equitable value distribution—not just transaction volumes or sophistication of AI technology.
Shaping Trustworthy AI for Tomorrow
The session at the India AI Impact Summit 2026 addressed the growing necessity for international cooperation to develop trustworthy AI, emphasizing that no single country can achieve AI trustworthiness in isolation given the interconnected, geopolitical, and supply chain realities. The keynote highlighted how technology, particularly AI, is not only a matter of innovation and economic policy but has become central to foreign, security, and trust policies worldwide. Fragmentation—both geopolitical and infrastructural—was identified as a significant risk to global AI reliability and trust. The Norwegian approach to trustworthy AI in the public sector was presented as a holistic, evolving framework anchored in institutional trust, public values, robust governance, high-quality data, digital literacy, and targeted research. Industry perspectives, with a focus on the maritime sector, reinforced that AI's future impact necessitates integrating technical excellence, resilience, human oversight, and continuous adaptation to new risks, with trustworthy AI being key for both safety and business performance in a global, regulated environment.
- Trustworthy AI requires global cooperation on standards, transparency, and supply chains to avoid fragmented, untrustworthy systems.
- AI is now core to foreign, security, and trust policy, not just innovation or economic growth.
- Concentrated AI supply chains and foundational model control create strategic and trust-related vulnerabilities.
- Regulation is mostly national, but AI's impacts—including on democracy and elections—are inherently global.
- Norway's public sector AI framework includes six pillars: trusted institutions, public values (equality, fairness, accountability), robust governance, high-quality public data, high digital literacy, and targeted multidisciplinary research.
- Establishment of a Norwegian government AI unit to accelerate responsible AI development and use.
- The maritime industry is using AI to drive efficiency and decarbonization; 30,000 vessels (a third of the global fleet) have connected technologies from Kongsberg Maritime.
- Autonomous and remotely supported vessels are emerging trends, requiring robust digital integration, cyber security, and human-AI trust.
- Emphasis on continuous stakeholder engagement, including research, public-private collaboration, and user education.
- Warning that if AI development becomes a purely geopolitical contest, trust will fragment into regional blocs, undermining global AI reliability.
Regional Leaders Discuss AI-Ready Digital Infrastructure
The session at the India AI Impact Summit 2026 brought together global and regional stakeholders to discuss the foundational requirements and key challenges for realizing AI's full potential, particularly across the Global South. The panel touched on four interconnected pillars: AI-ready data infrastructure, the critical role of skills and standards, the importance of locally relevant data, and strategic investments in digital infrastructure. Key insights included the need for discoverable, high-quality, interoperable, and accessible data to underpin AI development; the major investments in AI compute and data centers across emerging economies such as Uzbekistan and Indonesia; and the focus on balancing foundational infrastructure with service-level AI applications, especially in high-impact sectors like agriculture and healthcare. Concerns about the energy intensity of current AI models were raised, alongside opportunities for leapfrogging through innovative approaches to infrastructure. The conversation also underscored the urgency of addressing the AI skills gap and ensuring equitable access to technologies, while harnessing international collaborations and national programs to build capacity and inclusivity. All panelists agreed that human-centric and welfare-oriented AI development, grounded in sound strategy and investment, is core to maximizing AI's developmental impact.
- Emphasis on making data AI-ready through four pillars: discoverability, trustworthiness (quality assessment), interoperability (unique identifiers), and usability (standards/classifications).
- Development of a quality assessment framework for data credibility and shared metadata standards for data discoverability.
- Uzbekistan announced a $200 million government data center powered by supercomputers from Nvidia and a $5 billion energy-efficient data center project with Saudi Arabia's DataVolt, aiming to become operational within 2-3 years.
- Uzbekistan launched the '5 Million AI Leaders' program with UAE partners to build AI skills across all demographics and adopted a national AI Strategy 2030.
- WTO highlighted that AI-driven trade could grow global trade by nearly 40% by 2040 ('40 by 40 effect'), provided infrastructure, skills, and policy readiness are addressed.
- Indonesia identified a 'triple deficit': data and compute infrastructure, localized AI-ready data, and AI talent shortages; initiatives underway via a national AI roadmap and talent development.
- Asian Development Bank (ADB) emphasized the need to balance foundational infrastructure investment with direct AI applications in key sectors, supporting ground-level impact in large economies like India.
- Concerns were raised about the high energy and compute costs of current AI infrastructure, with calls for more efficient, innovative approaches, echoing earlier summit discussions.
- Broad agreement on the need for human-centered, inclusive, and welfare-driven AI strategies supported by coordinated national and international initiatives.
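The four "AI-ready data" pillars above (discoverability, trustworthiness, interoperability, usability) can be pictured as a simple checklist applied to each dataset record. The sketch below is purely illustrative — the `DatasetRecord` fields, threshold, and classification scheme are hypothetical assumptions, not part of any framework announced at the summit:

```python
# Hypothetical sketch of the four "AI-ready data" pillars described above.
# All field names, the 0.7 quality threshold, and the example values are
# illustrative assumptions, not from any real standard or summit framework.
from dataclasses import dataclass


@dataclass
class DatasetRecord:
    identifier: str               # unique ID -> interoperability
    metadata_keywords: list       # shared metadata for search -> discoverability
    quality_score: float          # 0.0-1.0 quality assessment -> trustworthiness
    classification: str           # standard classification scheme -> usability


def is_ai_ready(rec: DatasetRecord, min_quality: float = 0.7) -> bool:
    """Return True only if the record satisfies all four pillars."""
    return (
        bool(rec.identifier)
        and len(rec.metadata_keywords) > 0
        and rec.quality_score >= min_quality
        and bool(rec.classification)
    )


record = DatasetRecord(
    identifier="ds-0001",
    metadata_keywords=["agriculture", "rainfall"],
    quality_score=0.85,
    classification="agri/climate",
)
print(is_ai_ready(record))  # True: all four pillars satisfied
```

The point of the all-of-the-above check is that, in the panel's framing, a dataset missing any one pillar (e.g. high quality but no stable identifier) is not yet usable for AI development.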
Aligning AI Governance Across the Tech Stack | ITI C-Suite Panel
The session at the India AI Impact Summit 2026 convened senior leaders from major technology companies—Zscaler, Zoom, Amazon, and DeepL—to dissect the global AI governance landscape. Panelists underscored the necessity for international alignment on AI policy to foster innovation, enable interoperability, and manage risk, while warning against over-regulation that could stifle growth. They highlighted the challenges arising from fragmented national and regional regulations, such as inefficiencies, increased costs, and barriers to scaling global AI solutions. All agreed on the need for a ‘principles-based’ approach that harmonizes basic frameworks around privacy, security, and trust, while allowing room for national sovereignty and flexibility. Security and trust were emphasized as foundational elements, with the recognition that users and enterprises must also share responsibility for ethical AI deployment. The session crystallized a consensus on pursuing balanced, adaptive governance and cross-sector collaboration as essential to unlocking AI’s benefits for society and the global economy.
- Consensus among tech leaders on the need for international alignment in AI governance to support innovation and interoperability.
- Over-regulation is cautioned against, as excessive compliance can hinder innovation and create inefficiencies, particularly in global operations.
- Fragmented regulatory environments—such as varying U.S. state laws and swift, sometimes premature, regulations in places like Colorado and the EU—lead to 'buyer’s remorse' and implementation hurdles.
- Panelists advocate for a balanced framework with commonly understood norms and values, rather than rigid uniformity.
- Security and trust are considered indispensable, with risks like data poisoning and uncontrolled AI use highlighted as central governance concerns.
- Recognition that sovereignty is important, but unchecked restrictions on data flow impede economic progress and technological advancement.
- Responsibility for secure, ethical AI lies not only with governments and corporations but also with users and non-technical enterprise roles.
- Economies of scale and effective R&D in AI depend on transparent, globally consistent regulatory approaches.
- Calls for regulatory focus on high-risk use cases and harms evident today, rather than speculative or all-encompassing frameworks.
Setting the Rules: Global AI Standards for Growth and Governance
The panel discussion at the India AI Impact Summit 2026 centered on the critical role of AI standards in enabling responsible and trustworthy AI deployment across global industries. Leaders from technology firms, international standards bodies, and government agencies emphasized the need to create a unified language and measurable benchmarks for defining 'what good looks like' in AI. Key themes included the challenges of achieving global cooperation, the lag between regulation and standards, the necessity for transparent testing and reporting frameworks, and the importance of including diverse stakeholders in setting these standards. The dialogue highlighted the urgent drive to formalize best practices for risk management, foster interoperability, and ensure both consumer trust and ease of innovation as AI systems become increasingly integral to all sectors.
- Strong consensus that global cooperation is essential for effective, inclusive AI standards that benefit people, planet, and prosperity.
- Industry leaders (Microsoft, OpenAI, Google DeepMind, Qualcomm) and policymakers are prioritizing alignment on testing, transparency, disclosure, and incident reporting as foundational standardization areas.
- Benchmarks and formal measurement methodologies (as pushed by ML Commons) are a critical step to validating claims about AI functionality, risk, and behavior.
- Regulation is moving ahead of technical and process standards, creating both pressure and opportunity for rapid standards development in areas like risk management.
- Standard setting is not just an industry endeavor; multiple stakeholders, including governments and consumers, need to participate to legitimize and align on what constitutes 'good enough'.
- Key policy bodies, such as Singapore’s AI governance unit and India's Bureau of Indian Standards, are focusing on global alignment and practical verification processes to advance trust and market adoption.
- A universal, transparent language for risk management and safety practices across the AI supply chain is seen as vital for adoption and interoperability.
- Standards are needed to address not just existing risks but also unique, emerging risks associated with frontier AI models and agents.
- There is recognition that standards need to be accessible and practical for engineering and product teams at scale.
- The evolution and maturity of AI standards require active dialogue between standard setters, regulators, industry practitioners, and academia.
Responsible AI for Shared Prosperity
This session at the India AI Impact Summit 2026 highlighted transformative, globally coordinated efforts to build inclusive, locally relevant, and equitable AI systems across Africa and Asia. Key announcements included major investments in multilingual AI tools and infrastructure such as an African languages AI initiative targeting over 40 languages, the establishment of Africa's first public sector AI compute cluster at the University of Cape Town, and the launch of the Asia AI for Development Observatory. Partnerships span governments (UK, Germany, Japan, Sweden, Canada), foundations (Gates, GSMA), and community-driven hubs such as Masakhane and Wadhwani AI. Leaders discussed the crucial importance of representation, data, talent development, local innovation, and gender-responsive programming. On both continents, emphasis was placed on reflecting local languages and cultures in AI models and on developing practical applications in health, agriculture, and education. The summit underscored a 'force for good' vision: supporting AI ecosystems that empower historically underserved populations, preserve cultural and linguistic diversity, and drive socioeconomic development, particularly through responsible AI governance and capacity building.
- AI for Development program is funding more than 40 African languages, ensuring local linguistic access.
- Africa's first public sector AI compute cluster is being installed at the University of Cape Town to empower local researchers.
- Asia AI for Development Observatory launched to support responsible AI governance in the region.
- Partnerships involve UK, Canada’s International Development Research Centre, Gates Foundation, Governments of Germany, Japan, Sweden, and the GSMA Foundation.
- Four additional startups in Asia and Africa to receive support, including TORN AI (Morocco) providing voice interfaces in local dialects.
- Masakhane African Languages Hub aims to reach 1 billion Africans with relevant AI tools centered on 50 key languages.
- 40% of Masakhane's funding is dedicated to use cases with a special focus on women's economic empowerment through Project ECHO.
- Wadhwani AI in India works across health, education, and agriculture, running solutions in 16+ languages, with practical deployments in disease surveillance and children's reading fluency.
- Significant need for investment in data, compute, talent, and sustained, ecosystem-level capacity building.
- Strong emphasis on safeguarding representation, cultural preservation, and gender equality within AI development.
How the EU’s GPAI Code Shapes Safe and Trustworthy AI Governance | India AI Impact Summit 2026
The panel discussion at the India AI Impact Summit 2026 brought together top executives and experts from companies including Zscaler, Amazon, Zoom, and DeepL to deliberate on international AI governance. The central theme was the tension between the necessity for global coordination in AI regulation and the risks inherent in fragmented or overbearing national frameworks. Panelists highlighted the importance of cross-border data flows, the dangers of excessive or premature regulation that could stifle innovation, and the essential role of trust, security, and responsible adoption at both the government and enterprise levels. While recognizing countries’ rights to sovereignty, the speakers consistently advocated for a balanced, harmonized approach to AI regulation grounded in shared principles, pragmatic risk assessment, and robust security overlays. They emphasized the collaborative responsibilities of governments, enterprises, and users in ensuring AI’s positive impact and warned against creating regulatory barriers that could impede economic progress or technological adoption—especially in the context of global supply chains, communication, and data-driven services.
- Global leaders from Zscaler, Amazon, Zoom, and DeepL called for alignment in AI regulation to avoid a patchwork of country-specific rules that frustrate innovation and increase compliance burdens.
- Over-regulation and premature policies—before fully understanding AI’s real-world impacts—were consistently identified as threats to innovation and competitiveness.
- Jay Chaudhry (Zscaler) emphasized that while governance is necessary, excessive control can 'kill innovations,' advocating for balance.
- Aparna Bawa (Zoom) highlighted the critical need for unencumbered cross-border data flows, warning that fragmented policies can impede individuals’ and countries’ progress.
- David Zapolsky (Amazon) cited practical examples, such as Colorado’s paused AI regulation and hesitation in the EU, to illustrate the global uncertainty resulting from rushed policy-making.
- Panelists agreed that risk-based, harm-driven approaches—identifying and targeting high-risk AI uses for oversight—are preferable to blanket regulations.
- Jarek Kutylowski (DeepL) and others supported a framework with transparent, globally consistent norms—allowing some national flexibility, especially around privacy, while avoiding excessive divergence.
- The necessity of embedding robust security at every layer of AI systems was underscored to ensure both sovereignty and resilience to abuse like data poisoning.
- Beyond government, enterprises and end-users were recognized as having shared responsibility in ensuring responsible and secure AI deployment.
- The summit notably marked the first such detailed global AI governance discussion taking place in the Global South, highlighting a push for inclusive, international consensus.
From KW to GW: Scaling the Infrastructure of the Global AI Economy
The session at the India AI Impact Summit 2026 featured leaders from Google, IRCTC, BharatGPT, Vertiv, and Nvidia, focusing on India's drive for AI sovereignty, innovation, inclusivity, and infrastructure scale. Panelists highlighted India's rapid adoption of AI for societal and business advancement, Google's investment in indigenous data centers and tools supporting data residency and security, and IRCTC's use of AI and ML to combat fraud and optimize ticketing during peak demand. BharatGPT distinguished itself through purpose-driven, domain-specific models built collaboratively with Indian enterprises. Google underscored its push for inclusivity, providing free AI-powered educational tools to address the digital divide. In the following fireside chat, Vertiv and Nvidia spotlighted a paradigm shift in data center design—moving from cloud-era, grid-first planning to GPU-first, workload-driven deployments, advocating rapid, pod-based deployment, and emphasizing the need for scalable, transparent AI “factories.” The speakers reinforced that collaboration, indigenous infrastructure, and innovative deployment models are critical to sustaining India’s position as a global AI powerhouse.
- India aims for complete AI sovereignty, focusing on local control of both platforms and hardware.
- Google is expanding indigenous data centers in India (notably in Visakhapatnam) to ensure data residency and support sovereign AI.
- Google launched a new 'indigenous data box' for secure, on-premises AI processing powered by Gemini.
- IRCTC leverages advanced and indigenous AI/ML to manage peak load, mitigate automated abuse, and collaborate with Indian startups for social data analysis.
- BharatGPT develops domain-specific AI models in partnership with Indian enterprises, emphasizing purposeful, trustworthy AI tailored for Indian contexts.
- Google is offering free Gemini-based mock tests for students, promoting digital inclusivity and bridging the rural-urban divide.
- Vertiv and Nvidia advocate a shift toward 'inside-out' data center designs—starting with GPU workloads rather than traditional grid-oriented approaches.
- Pod-based GPU deployment is championed for speed, scalability, and reproducibility, enabling rapid expansion of AI capability.
- Both Vertiv and Nvidia stress the global scale of AI demand, the necessity for transparent capacity planning, and treating AI infrastructure as 'factories' for continual innovation.
NextGen AI: Mastering Technical Excellence with Ethical Integrity
The session at the India AI Impact Summit 2026 featured a distinguished panel comprising academic leaders, government officials, AI entrepreneurs, and talent strategists, focusing on defining 'NextGen AI' and strategizing for skill development and talent gaps in India. Panelists collectively stressed the importance of critical thinking, domain expertise, foundational technical knowledge, real-world problem solving, and ethical judgment as essential qualities of next-generation AI talent. The discussion also emphasized the transition from a focus on using AI tools to understanding their underlying principles, the necessity for constant upskilling, and the criticality of addressing domain-specific and regulatory knowledge. From a telecom perspective, the shift towards AI-native infrastructure—especially with the advent of 6G—demands a transformation in both skills and standards, underlining AI's centrality in upcoming technological ecosystems. The democratization of AI via vernacular languages and inclusion, along with a focus on practical applications for social and business impact, were highlighted as paths forward for India's AI journey.
- Panelists included leaders from IIT Patna, Department of Telecommunications, Tatras, Sabud Foundation, Vinces IT, and Mount Talent Consulting, offering perspectives from academia, government, entrepreneurship, and talent development.
- NextGen AI talent needs critical thinking, risk-taking abilities, foundational understanding of AI, domain specialization, and the ability to question AI outputs rather than blindly trusting them.
- There is an emerging gap between superficial AI tool proficiency and a deeper mastery of core concepts; panelists advocated for educational reforms to emphasize foundational and creative problem-solving skills.
- Ethics and regulations are becoming as vital as technical skills: future AI professionals must stay updated on sectoral regulations and ethical frameworks.
- AI’s role in telecom standards is expanding: With 6G, AI will shift from an 'add-on' in 5G to being natively integrated at the component level, requiring new standards and skills for telecom engineers.
- AI’s democratizing potential can be harnessed through vernacular interfaces and broader inclusivity, helping bridge the digital divide and empower non-English speakers.
- Success in the AI workforce will come from T-shaped professionals—deep expertise in a domain, with breadth across hardware, software, and ethical considerations.
- Industry-academia collaborations, such as AI upskilling programs, are necessary to fill the ever-widening talent gap especially in real-world application domains.
Fireside Chat: The Future of AI & STEM Education in India
The opening session of the India AI Impact Summit 2026 showcased a diverse and high-profile panel comprising policymakers, academic leaders, industry experts, startup leaders, and government officials. The keynote by Shri Narendra Bhushan, Additional Chief Secretary of Uttar Pradesh, emphasized the urgent need for the Indian education ecosystem to evolve at the pace of AI advancements, especially in STEM fields. He highlighted the scale of India’s output—over 3 million STEM graduates annually—and discussed key policies such as NEP 2020, Digital India, and Start-up India that have laid a solid policy foundation. Uttar Pradesh’s significant AI education initiatives, such as the creation of a Virtual AI University (initial budget of over 100 crores), an experiential Industry 4.0 project in 150 polytechnics (worth 7,000 crores), teacher upskilling through smart classroom initiatives like ‘Thursday Connect’, and device distribution to 8 million students annually, were detailed. Bhushan called for ongoing curriculum reforms, teacher empowerment, deeper industry-academia partnerships, and a shift in focus from mere AI adoption to developing critical, ethical, interdisciplinary thinkers. Dr. Rajkumar of OP Jindal University built on this by stressing the importance of including liberal arts alongside STEM (turning STEM to STEAM), reimagining university governance and career services in light of rapidly automated jobs, and prioritizing uniquely human skills. The session set an aspirational tone for integrating AI responsibly and equitably into education to prepare Indian youth for the future of work.
- Over 3 million STEM graduates produced annually in India, with 1.5 million engineers.
- Major education policy alignments supporting AI integration: NEP 2020, Digital India, Startup India.
- Uttar Pradesh to establish a Virtual AI University with an initial investment of more than 100 crores (hub and spoke model).
- Industry 4.0 experiential learning project in 150 polytechnics worth 7,000 crores (in partnership with the Tata consortium).
- Annual distribution of around 8 million tablets or mobile devices to new higher education students in Uttar Pradesh.
- 'Thursday Connect': State initiative for regular AI education sessions and teacher upskilling in smart classrooms.
- IMF and World Economic Forum predict over 40% of jobs/skills will be impacted by AI.
- Need for frequent curriculum revision, embedded mentorship, and real-world project-based learning.
- Emphasis on teaching systems thinking, ethical reasoning, communication, and AI literacy.
- Calls for broadening STEM to STEAM (including liberal arts and humanities) in curriculum redesign.
- Commitment to responsible AI deployment, addressing digital divide, bias, inclusion, and equity.
India’s AI Leap: Policy to Practice with AIP2
The session at the India AI Impact Summit 2026 focused on the practical diffusion of AI from high-level ambitions to on-the-ground results, with India positioned as a global leader in inclusive, human-centric AI deployment. Highlights included the launch of the Global South AI Diffusion Playbook, designed as an implementation guide spanning infrastructure, data and trust, institutional procurement, market access, and skills development. Initiatives emphasized bridging the digital divide — through platforms like Bhashini (offering services in 22 languages) and robust school connectivity efforts (Giga Initiative) — and nurturing an AI-native startup ecosystem, notably via the MeitY Startup Hub, which has supported over 6,000 startups and manages significant government and private funding pools (up to INR 8,000 crores through the India AI mission). The summit underscored the importance of interoperability, standards, skilling, and responsible AI, as articulated through multistakeholder collaborations and standards databases (over 850 AI-related standards published by ITU, ISO, and IEC) to build trust and combat challenges such as deepfakes. European perspectives offered by MEP Brando Benifei highlighted global alignment with regulatory frameworks such as the EU AI Act and stressed the necessity of trust and AI literacy to enable widespread adoption across public, private, and civil society actors. Startups were repeatedly positioned as the vital link connecting technological capability with meaningful, scalable, and responsible real-world impact.
- The Global South AI Diffusion Playbook was launched, providing a five-dimensional implementation guide covering infrastructure, data/trust, institutional procurement, skills, and market shaping.
- India's Bhashini platform now delivers government services in 22 languages, setting an example of linguistic inclusivity in AI-powered digital public infrastructure.
- The Giga Initiative, in partnership with UNICEF, aims to connect every school to the internet globally; current commitments stand at $80 billion towards a target of $100 billion.
- India's MeitY Startup Hub has supported over 6,000 deep tech startups, facilitating mentorship, market access, and government/private funding with up to INR 8,000 crore allocated by the India AI mission.
- A coalition of 70+ partners under ITU offers over 180 learning resources in 13 languages for AI skills development.
- The ITU, with ISO and IEC, has published an AI standards exchange database containing over 850 standards and technical publications, including multimedia authenticity standards to combat deepfakes.
- The EU AI Act was cited as a practical regulatory framework focusing on high-risk areas and enabling global reference for responsible AI adoption.
- Startups were recognized as essential transmission mechanisms to drive AI innovation from labs to large-scale market deployment, especially in supporting SMEs and large enterprises to adopt AI responsibly.
- The summit called attention to the risk of the digital divide becoming an 'AI divide,' emphasizing infrastructure, skilling, standards, and inclusive policy as critical enablers.
The Agent Universe: From Automation to Autonomy
The session focused on the transformative impact of hyper-automation and AI in the Banking, Financial Services, and Insurance (BFSI) sector, featuring leaders from NatWest and Signet. The panel discussed real-world advances in implementing AI and hyper-automation at enterprise scale, highlighting productivity gains, improved customer experiences, and robust risk management. NatWest detailed its rollout of AI tools, including Microsoft Copilot and LLMs, to 60,000 employees and the integration of intelligent assistants serving 20 million customers, marking a substantial shift from traditional automation to generative AI-led journeys. Productivity among relationship managers increased by 30%, and 35% of code is now AI-generated. The bank has made responsible AI a priority with comprehensive training programs, robust governance, and dedicated leadership roles for both responsible AI and AI research. Signet's CTO and chief AI officer shared insights from building national-scale digital systems, emphasizing a relentless focus on compliance, data integrity, explainability, and strong architectural principles. Their hybrid approach uses AI for interaction layers while relying on trusted deterministic systems such as Excel for mathematical accuracy. Both institutions underscored the need for zero-trust security, modular architectures, and continuous customer feedback, emphasizing that trust, governance, and future workforce readiness are foundational for sustainable AI integration in regulated environments.
- NatWest has deployed AI tools such as Microsoft Copilot and custom LLMs to 60,000 employees across the organization.
- NatWest's AI-powered assistant, Cora, now handles 12.9 million customer interactions and serves 20 million customers.
- The number of customer journeys managed by generative AI grew from 4 to 11 in the past year.
- 100% of complaints in NatWest's commercial and institutional business are now processed with AI summarization.
- Private banking relationship managers are 30% more productive thanks to AI augmentation.
- 12,000 engineers at NatWest use AI tools for coding, with 35% of all code being AI-generated.
- 58,000 NatWest employees completed mandatory AI/data ethics training; responsible AI policies govern all deployments.
- NatWest appointed both a Head of Responsible AI and a Chief AI Research Officer in 2024-2025.
- Signet CTO highlighted the importance of curated data, explainability, audit trails, and zero-trust security in AI systems.
- Signet’s hybrid approach: AI manages contextual/human interaction, while tools like Excel maintain mathematical accuracy.
- Signet processes up to $60 billion yearly on GeM and manages 10-12 billion transactions monthly on the GST platform.
- Compliance by design and tight architectural controls are non-negotiable for AI in regulated domains.
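The hybrid approach above (AI for the interaction layer, a trusted engine for the arithmetic) can be sketched as a simple routing pattern. Everything here is an illustrative assumption: `call_llm` is a hypothetical stub and the interest calculation is an invented example, not Signet's actual design.

```python
# Minimal sketch of the hybrid pattern described above: the AI layer handles
# conversational context, while numeric work is routed to a deterministic
# engine. `call_llm` is a hypothetical stand-in, not a real API.
from decimal import Decimal

def deterministic_interest(principal: Decimal, rate: Decimal, years: int) -> Decimal:
    """Compound interest computed exactly with Decimal, never by the model."""
    return (principal * (Decimal(1) + rate) ** years).quantize(Decimal("0.01"))

def call_llm(prompt: str) -> str:
    # Hypothetical placeholder for the conversational model.
    return f"[model reply to: {prompt}]"

def answer(query: str, principal: Decimal, rate: Decimal, years: int) -> str:
    # Route: math goes to the trusted engine; phrasing goes to the model.
    amount = deterministic_interest(principal, rate, years)
    return call_llm(f"Explain that {query} yields {amount}")

print(deterministic_interest(Decimal("1000"), Decimal("0.05"), 10))  # → 1628.89
```

The design point is that the model never produces the number itself; it only phrases an answer around a value computed deterministically.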
Building Public Interest AI: Catalytic Funding for Equitable Compute Access
The session at the India AI Impact Summit 2026 focused on the urgent need to democratize access to AI compute infrastructure, positioning India as a global leader in public interest AI. Speakers emphasized that the digital divide has evolved into a compute divide, where equitable access to powerful computational resources—like GPUs and cloud capacity—will dictate who shapes the future of AI. Through the India AI mission and a compute capacity plan mobilizing over 38,000 GPUs as public infrastructure, India is setting a precedent for large-scale, public AI ecosystems anchored in openness and sovereign capability. The launch of the 'MERI' (Multi-stakeholder AI for Trusted and Resilient Infrastructure) collaborative platform was announced to facilitate shared access to compute, data, and AI models as digital public goods, accommodating diverse national contexts. The working group, co-led by India, Kenya, and Egypt, identified foundational pillars of compute, capability, collaboration, connectivity, compliance, and context. The session called for shifting from analysis to concrete action, with a strong philanthropic and South-South cooperation angle, to ensure inclusive, globally competitive, and locally grounded AI ecosystems. The upcoming working version of the key report 'Opening Up Computational Resources for New AI Futures' was released for public feedback, highlighting India’s operational and scalable approach to AI democratization.
- India is deploying over 38,000 GPUs as public infrastructure under its AI mission, marking one of the world's largest public compute initiatives.
- The 'compute divide'—inequitable access to computational resources—is replacing the traditional digital divide, making infrastructure a central AI policy issue.
- The MERI platform (Multi-stakeholder AI for Trusted and Resilient Infrastructure) was launched to offer voluntary, modular, and customizable compute and data access as digital public goods.
- Open-source models, access to relevant datasets, and robust yet adaptable governance are key focuses for enabling locally grounded and globally competitive AI innovation.
- Six foundational pillars were identified: compute, capability, collaboration, connectivity, compliance, and context.
- India, Kenya, and Egypt co-chaired the working group on democratizing AI resources, embodying a South-South partnership model.
- The session included the release of the report 'Opening Up Computational Resources for New AI Futures' for stakeholder consultation.
- Philanthropic organizations, especially the Rockefeller Foundation, are urged to play a catalytic role in unlocking capital, reducing risk, and facilitating partnerships.
- Rather than rationing, India advocates for intelligent prioritization to maximize the public utility of compute resources, especially for innovation and research.
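The closing bullet on intelligent prioritization rather than rationing can be illustrated with a toy scheduler: score each compute request for public utility and grant GPUs to the highest-scoring jobs first. The scoring function, weights, and job names below are invented for illustration and do not describe the India AI mission's actual allocation policy.

```python
# Toy sketch of priority-based compute allocation (as opposed to flat
# rationing): jobs are scored for public utility and GPUs go to the
# highest-scoring requests first. Scoring weights are illustrative only.
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Job:
    neg_score: float            # heapq is a min-heap, so store -score
    name: str = field(compare=False)
    gpus: int = field(compare=False)

def utility(is_research: bool, users_reached: int) -> float:
    # Hypothetical scoring: favour research and broad public reach.
    return (2.0 if is_research else 1.0) * users_reached

def allocate(jobs: list[Job], total_gpus: int) -> list[str]:
    heapq.heapify(jobs)
    granted, remaining = [], total_gpus
    while jobs and remaining > 0:
        job = heapq.heappop(jobs)
        if job.gpus <= remaining:
            granted.append(job.name)
            remaining -= job.gpus
    return granted

jobs = [
    Job(-utility(True, 5000), "crop-disease-model", 8),
    Job(-utility(False, 200), "ad-ranking", 16),
    Job(-utility(True, 50000), "speech-corpus-training", 12),
]
print(allocate(jobs, 20))  # → ['speech-corpus-training', 'crop-disease-model']
```

Under a flat ration each job would get the same slice regardless of impact; here the low-utility request simply waits until capacity frees up.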
Building Climate-Resilient Systems with AI
The session at the India AI Impact Summit 2026 brought together an eminent global panel to address the dual challenge of promoting economic development and achieving climate sustainability through the application of artificial intelligence (AI). The panel highlighted the growing urgency of climate change and the transformative potential of AI to drive decarbonization and adaptation across sectors. Central to the discussions was the Green Artificial Intelligence Learning Network (Grail), a collaborative, not-for-profit initiative aiming to converge AI, academia, industry, and governments to create scalable solutions for climate mitigation and adaptation. The panel recognized key barriers such as data shortages and workforce gaps, outlined recent collaborative achievements—including an online platform, active government engagement, and multi-sector taxonomies—and showcased landmark research that distilled actionable recommendations for leveraging AI to reduce global greenhouse gas (GHG) emissions. Significant new partnerships involve major industrial consortia representing a substantial share of global GHGs, with a concerted focus on rapid deployment and impact. The session concluded with an invitation for radical, action-oriented collaboration to harness AI’s exponential advances in service of the planet's most pressing existential crisis.
- Launch and promotion of the Green Artificial Intelligence Learning Network (Grail), a not-for-profit based in London focused on AI-enabled climate solutions.
- Grail fosters collaboration between academic, commercial, industrial, governmental, and philanthropic entities to scale AI-driven climate mitigation and adaptation initiatives.
- Recent summit in London gathered 200 attendees, 115 organizations, and 60 speakers to strategize AI applications for power, building materials, and carbon markets.
- Grail's online collaborative platform is now open for global participants to co-create climate solutions.
- Collaboration with the World Business Council involving 250 companies responsible for 26% of global GHG emissions and 24% of world revenues, aiming to scale AI-powered decarbonization.
- Partnerships with international energy coalitions—for example, UNEZA with 71 companies and a goal to double clean power capacity to 1,500 GW by 2030—with AI as a central enabler.
- Recent landmark study, led by Professor David Sandalow and supported by the Japanese government, reviewed strategies for deploying AI to reduce greenhouse gases, publishing 17 actionable chapters available freely online.
- The Grantham Institute quantified 0.5–1.4 gigatons of possible AI-related GHG emissions from data centers but found a net benefit of 3.5–5.4 gigatons in potential GHG reductions via AI applications.
- AI’s contribution to total GHG emissions is currently less than 1%, with the largest obstacles to scaling impact being data availability and lack of skilled personnel.
From Data to Deployment: Regulatory Pathways for AI Medical Devices
MahaAI: Building Safe, Secure & Smart Governance
The session at the India AI Impact Summit 2026 showcased Maharashtra's bold vision and concrete progress in embedding artificial intelligence at every level of governance. The keynote highlighted Maharashtra's emergence as a global 'living laboratory' for AI-driven public service delivery, with partnerships across industry and academia and the state's continued investment in intelligent governance infrastructure. Key initiatives include the AI-powered Mahak Crime OS for transforming law enforcement, capacity-building programs ensuring public sector stakeholders are AI-ready, and innovative efforts like the Maha GPT small language model for searching government orders. The summit presented impressive results, such as the Maharashtra Cyber Security Project that froze over ₹1,000 crore from cybercriminals and saved at least 70 lives through AI-enabled rapid intervention within six months. The discourse reinforced the dual imperative of globally interoperable, risk-based governance frameworks rooted in human-centered principles, and robust cybersecurity in the face of emerging threats like quantum computing. Across contributions, the session called for a shift from fragmented or static regulation to agile, coordinated, and value-driven policies to ensure AI delivers inclusive prosperity and public trust.
- Maharashtra positions itself as a 'living laboratory' for AI governance, partnering with leading global technology firms.
- AI-powered Mahak Crime OS, developed with Microsoft, has overhauled crime prevention and investigation with faster response and transparent processes.
- State-run digital agency Mahai has built a modular, cloud-native, API-driven AI governance infrastructure spanning recruitment, urban planning, disaster management, and smart mobility.
- The Maharashtra Cyber Security Project utilizes AI tools and consultants and has, in under six months, frozen over ₹1,000 crore in illicit funds and saved more than 70 young women from cyber-extortion-driven suicide attempts.
- A new Maha GPT (small language model) project, in partnership with IIT Bombay, will allow citizens and officials to query and interpret complex government orders and Supreme Court judgments.
- Capacity-building efforts feature tailored government staff training through AI university programs and online courses to ensure AI literacy.
- State Data Authority initiatives focus on making large-scale, valuable Indian data beneficial for local economic and health system outcomes rather than foreign extraction.
- Panelists emphasized the governance paradox: the need for intelligent, adaptive, risk-based regulations that foster both innovation and public safety, moving towards global standards and cooperation.
- Cyber security, threat intelligence, and rapid AI-enabled response capabilities are central, with over 1 million attempted cyberattacks during a recent crisis being thwarted by tools like Luminar, Cognite, and Pathfinder.
- Policymakers highlighted emerging risks from quantum computing, which could break even the strongest current encryptions, indicating the next frontier in AI and cybersecurity defense.
- Summit attendance included over 20 heads of state, 60 ministers, and hundreds of global AI leaders, signaling high-level international commitment.
AI Safety at the Global Level: Insights from Digital Ministers & Officials
The session at the India AI Impact Summit 2026 focused on the critical need for independent, science-based assessment of AI capabilities and risks to inform global policy. Panelists emphasized the rapid evolution and increasing autonomy of AI systems, especially multi-agent models that can interact without direct human oversight, raising new and under-explored safety and trust issues. Singapore showcased its proactive approach to AI governance, both domestically through legislative action and regionally via ASEAN collaboration. The latest AI Safety Report, developed collaboratively with global expertise, was highlighted as a foundational 'ground truth' resource, bridging the gap between scientific understanding and policy formulation. Experts advocated for evidence-informed foresight and responsible regulation, emphasizing that while the report stops short of recommending specific policies, it empowers policymakers to craft robust, context-appropriate guardrails. Notably, the panel stressed the urgency of addressing the convergence of cybersecurity, biosecurity, and autonomous agents, and called for international cooperation and ongoing research to keep pace with emerging threats and challenges.
- AI Safety Report launched as an independent, scientific assessment to inform global policymakers on AI risks and mitigation strategies.
- Rapid increase in AI agency and autonomy in 2026 versus 2025, notably with multi-agent systems capable of prolonged, unsupervised operations.
- New Singapore legislation imposes statutory obligations on service providers to remove AI-generated harmful content, especially targeting women and children.
- Singapore stresses the need for thoughtful, targeted regulation to ensure effective protection without stifling innovation or creating false security.
- Panelists highlight insufficiently studied risks where independent AI agents interact, potentially compounding systemic vulnerabilities.
- The AI Safety Report incorporates OECD scenario planning and aims to provide real-time, evidence-based information beyond journalistic anecdotes.
- Growing recognition of AI's dual role as both a cybersecurity threat and target, with increasing intersection with biosecurity concerns.
- Call for the establishment of new democratic institutions and funding to bolster scientific evidence and international regulatory coherence.
- Research priorities include understanding the risks from the confluence of increased autonomy in agentic AI and advanced cybersecurity threats.
German–Asian AI Partnerships: Driving Talent, Innovation & the Future of Work
The session at the India AI Impact Summit 2026 focused on the critical need for international cooperation, especially between Germany and India, to ensure that the benefits of AI reach all sectors of society—particularly small and medium enterprises (SMEs), which are the backbone of both economies. Panelists emphasized a shift from merely developing AI technologies to deploying them effectively and inclusively, tailoring policies and education to drive workforce readiness and economic productivity. Concrete steps highlighted include expanding AI and technology training across higher education and vocational systems in India, deepening industry-academia collaboration, and embedding real-world industry exposure within academic curricula. Germany outlined its commitment through initiatives like AI living labs that link university students with SME-driven innovation. Both countries acknowledged the persistent concern over job displacement and the importance of proactive reskilling and policy frameworks to ensure technological transitions are human-centric. The session underscored the imperative of fostering accessible innovation ecosystems, trust, open data infrastructures, and pragmatic regulatory approaches to bridge global digital divides. Industry leaders called for greater originality and practical AI skills among graduates entering the workforce, highlighting the growing importance of integrating generative AI and productivity tools into everyday business and education. The collaboration serves as a template for scalable, impactful partnerships that both future-proof economies and prioritize social inclusion.
- Strategic shift from AI development to effective, responsible deployment, emphasizing SMEs’ needs.
- German-Indian partnership foregrounded as a cooperative model for inclusive AI adoption and global digital transformation.
- Establishment of 'AI living labs' at Indian universities (Ratandata University, Mumbai) to integrate students and SMEs into practical AI innovation.
- India’s National Education Policy 2020 embeds skill-focused AI training across higher education, including humanities, with 50% of university courses emphasizing skills.
- Expansion of research parks in Indian premier institutions—including IITs—from 3 to 9, with plans for further growth.
- Mandatory inclusion of internships, apprenticeships, and industry assessment within Indian degree programs to enhance workforce preparation.
- Bilateral cooperation on AI courses and open-source initiatives, with specific mention of Germany’s support for equitable access and environmental sustainability.
- Panelists addressed fears of job losses by reaffirming policies for seamless, equity-driven workforce transition and reskilling.
- Call for policies supporting open data, climate-friendly computing, and regulatory frameworks to facilitate inclusive, sustainable AI adoption.
- Industry representatives highlighted the need for graduates with genuine, original AI competency, not just familiarity with generative AI tools.
Scaling Trusted AI: How France and India Are Building Industrial & Innovation Bridges
The session at the India AI Impact Summit 2026 highlighted the deepening collaboration between France and India in artificial intelligence, technology, and innovation. French and Indian dignitaries, including Prime Minister Modi and President Macron, recognized numerous partnership agreements spanning AI, quantum computing, cloud infrastructure, security, healthcare, space technology, and more. Over 100 French companies participated, emphasizing diverse sectors like quantum photonics, cybersecurity, digital twins, and green tech. French Tech leaders underscored the rapid evolution of France’s startup ecosystem, now ranked among the globe's top three AI hubs alongside San Francisco and New York, and showcased Europe's leading AI firms such as Mistral AI and H Company. Indian leaders stressed the need for 'trusted AI' as the core enabler of large-scale adoption and innovation. The tone was notably optimistic, with a focus on the scale brought by India and the deep-tech excellence of France as highly complementary. The session also introduced a panel of international leaders from key technology firms, who discussed scaling AI solutions while embedding trust at every layer—deeming trust foundational, not optional. The summit was noted as a milestone in forging cross-border commercial ties, facilitating critical investments, and accelerating real-world, impactful innovation serving both economies and broader humanity.
- France sent a large delegation of over 100 companies across sectors like quantum, cybersecurity, mobility, green tech, etc.
- Key partnership announcements: Zasia Technology-GT Solved (engineering automation in AI design), Exotrail-DUVA Space (contract for 14 satellite propulsion systems), H Company-St. James Hospital (healthcare AI), North Friends Invest-TAB (connecting industrial ecosystems).
- French President Macron and Indian Prime Minister Modi attended and endorsed the summit.
- French startup ecosystem now cited as third globally in AI, after San Francisco and New York, achieving accelerated growth post-Paris AI Summit.
- French AI leaders such as Mistral AI, H Company, Agrico, Watlab Genomics, and Candela were highlighted as innovators.
- Emphasis on creating 'trusted AI' as foundational for mass adoption: trust must be built-in, not added later.
- India highlighted as an unparalleled scale market: 1.4 billion people, 200,000 startups, with engineering talent unmatched globally.
- Strong focus on collaborative innovation with shared values: trustworthy AI, low environmental footprint, positive societal impact.
- Panel of high-level executives from Tata Communications, Candela, Thales, HCL Technologies, and Dassault Systèmes, among others.
- Summit co-organized by Business France, La French Tech, and supported by major sponsors including Capgemini, Schneider Electric, BNP Paribas, and others.
How Multilingual AI Bridges the Gap to Inclusive Access
The session at the India AI Impact Summit 2026 showcased advancing collaboration between Switzerland and India in AI and research, emphasizing multilingual and inclusive AI development as a democratic imperative for global digital participation. Switzerland announced the launch of three new joint Indo-Swiss research calls focusing on geosciences, social sciences, and One Health, marking a significant strengthening of long-standing bilateral ties. The event highlighted Switzerland's commitment to open, equitable AI through initiatives like the open-source multilingual model Apertus and the Bhashini project in India, targeting AI-driven language solutions across India's major languages. The formal launch of the Indo-Swiss research framework program and the introduction of new funding schemes aim to foster ambitious, innovative, and durable international partnerships, with artificial intelligence set as a thematic priority moving forward. Panel discussions further explored the practical societal impacts of these collaborations, particularly how multilingual AI solutions directly empower marginalized and rural communities, enhancing their access to digital services. The session underscored a seamless international arc towards globally coordinated, locally relevant, and culturally embedded AI governance.
- Switzerland emphasized multilingual AI access as a democratic imperative for digital participation globally.
- Announcement of Switzerland hosting the Geneva AI Summit in 2027, following the Paris 2025 and India 2026 AI summits.
- Switzerland and India launched three new Indo-Swiss research calls: (1) Geosciences (natural hazards in mountain regions), (2) Social sciences (with the Indian Council of Social Science Research), and (3) One Health (with Indian biotechnology and medical research councils).
- Formal announcement of the Indo-Swiss research framework program, intended for sustained, strategic bilateral research collaboration.
- Introduction of explore, experiment, and expand grants plus increased researcher mobility funding to stimulate new and ongoing collaboration.
- Switzerland reinforced commitment to equitable AI with the open-source Apertus model supporting multilingual applications.
- Highlight of India's Bhashini initiative: deploying AI language tools across 22 official languages (expanding to 36+), covering speech recognition, translation, text-to-speech, and data digitization, and addressing the scarcity of digital linguistic resources through collective crowdsourcing of data.
- Practical deployments demonstrated in agriculture and manuscript digitization, showing real-world impact on farmer advisories and inclusion of tribal languages.
- The session connected European, African, and Asia-Pacific academic institutions via the International Computation and AI Network (ICAIN), with panelists from diverse global research hubs.
- Underlined global momentum and continuity in collaborative AI governance and multilingual inclusivity, bridging cultural and linguistic divides.
Building Sovereign and Responsible AI Beyond Proof of Concepts
The session at the India AI Impact Summit 2026 centered on addressing the infamous 'pilot-to-production gap' in AI projects, emphasizing that only 30% of AI pilots worldwide currently make it into production, largely due to issues of trust and systemic risk. The panel highlighted real-world incidents where AI projects failed because ethical, social, and operational dimensions were either overlooked or inadequately addressed, such as sustainability, sovereignty, governance, and tangible value for end users. To systematically counter these pitfalls, the speakers introduced the 'AI in 4D' framework, which encourages organizations to assess AI initiatives across four critical dimensions: sovereignty (control and ownership), sustainability (environmental impact), responsibility (ethics and governance), and value (real-world benefit). Through interactive examples and audience participation, it was demonstrated that neglecting any of these domains can result in project failure, public backlash, and ultimately, a loss of trust in AI systems. The session concluded that the future success of AI scale-up hinges on integrating these four perspectives into both pilot design and operational rollout.
- Only 30% of global AI projects progress from pilot to production, with trust as a major barrier.
- In December 2025 alone, the OECD AI Observatory recorded 600 unique AI-related incident reports globally, indicating growing risks and public concerns.
- Six common reasons for AI proof-of-concept failures were identified: insufficient planning for adoption and impact, governance failures, value misalignment, sovereignty concerns, sustainability pressures, and poor change management.
- Real-world examples demonstrated failures due to unsustainable compute and water usage, negative societal value (e.g., increased pedestrian risk), lack of sovereignty/control over models, and failure to ensure responsibility (e.g., bias and lack of appeal in social benefit allocation).
- The session proposed the 'AI in 4D' framework, advocating evaluation of AI projects across sovereignty, sustainability, responsibility, and value as essential for trust and scale.
- Strong emphasis on trust, ethical considerations, energy/environmental sustainability, and stakeholder alignment as prerequisites for AI success.
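The 'AI in 4D' assessment described above lends itself to a simple rubric: score a project on each of the four dimensions and flag it when any single dimension falls short, since (per the session) a weak domain cannot be offset by strength elsewhere. The 1-5 scale and threshold below are illustrative assumptions, not part of the framework as presented.

```python
# Sketch of an 'AI in 4D' check: a project is scored on sovereignty,
# sustainability, responsibility, and value, and any single weak dimension
# flags the project. Scale and threshold are invented for illustration.
from dataclasses import dataclass

DIMENSIONS = ("sovereignty", "sustainability", "responsibility", "value")
THRESHOLD = 3  # hypothetical minimum on a 1-5 scale

@dataclass
class Assessment:
    scores: dict[str, int]  # dimension -> 1..5

    def weak_dimensions(self) -> list[str]:
        return [d for d in DIMENSIONS if self.scores.get(d, 0) < THRESHOLD]

    def ready_to_scale(self) -> bool:
        # All four dimensions must clear the bar; a high average cannot
        # compensate for one neglected domain.
        return not self.weak_dimensions()

pilot = Assessment({"sovereignty": 5, "sustainability": 2,
                    "responsibility": 4, "value": 5})
print(pilot.ready_to_scale())   # → False
print(pilot.weak_dimensions())  # → ['sustainability']
```

The all-dimensions gate mirrors the session's central claim: a pilot that is sovereign, responsible, and valuable can still fail purely on sustainability.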
How nonprofits are using AI-based innovations to scale their impact
The session at the India AI Impact Summit 2026 explored the rationale, structure, and early outcomes of a cohort-based program designed to help Indian nonprofits integrate AI into their work. Moderated by Manohar Srikanth, the panel featured leaders from Project Tech for Dev, Agency Fund, and several participating nonprofits. Speakers highlighted the unique challenges nonprofits face in AI adoption, especially resource gaps and the lack of engineering and product management capacity. The cohort model, inspired by commercial startup accelerators but tailored for social impact, provided shared technical mentorship, peer learning, and support to overcome these barriers. The program, free for its inaugural group of seven NGOs, was structured as an open, competitive selection with a strong emphasis on organizational buy-in, hands-on mentoring, and collaborative IP development. Early use cases included teacher-support platforms operating on familiar channels like WhatsApp, aimed at enhancing pedagogical strategy at scale. The session emphasized the importance of user-centered design, peer exchange, and resource pooling in catalyzing responsible AI adoption by civil society organizations.
- Project Tech for Dev supported over 200 nonprofits with open-source tech and advisory, highlighting the need for deeper, cohort-based AI capacity-building.
- The 4-month AI cohort program (September–December), anchored by Project Tech for Dev and Agency Fund, brought together 7 selected NGOs to build real-world AI solutions.
- Selection emphasized organizations with clear use cases, resource commitment, and leadership buy-in; the program was free and intended as a pilot.
- Agency Fund provided funding and a shared pool of 10 technical staff (including newly added product managers) to address common resource bottlenecks in nonprofits.
- The model follows the startup accelerator philosophy (e.g., Y Combinator), emphasizing peer learning, iteration, and fitting AI to real social pain points.
- NGO participants developed AI solutions such as WhatsApp-based chatbots to support teachers with personalized, evidence-based pedagogical strategies.
- Cohort-based learning and resource sharing were cited as critical to overcoming capacity gaps that typically hinder civil society technology adoption.
AI and Data Driving India’s Energy Transformation for Climate Solutions
The session highlighted data.org’s role as a global catalyst for building AI and data capacity, particularly through its Capacity Accelerator Network (CAN) that operates across five continents and collaborates with more than 100 partners in India. Focusing on climate resilience, data.org and its partners have undertaken major initiatives to bridge gaps in climate and energy data, emphasizing the need for hyperlocal, interoperable, and actionable information. A spotlight presentation from Artha Global detailed a ground-breaking study on the impact of extreme heat on health and productivity in Delhi, revealing sharp inequalities in heat exposure and adaptation strategies across neighborhoods. The study, based on a large-scale survey, found significant health burdens, unequal access to cooling, and illustrated how green cover dramatically reduces heat exposure. The findings underscore the critical need for granular, neighborhood-level data to inform effective heat action plans and energy grid management for urban India’s climate adaptation. The session concluded with a call for multi-stakeholder collaboration and an upcoming showcase on open data architecture for India's energy sector.
- data.org’s Capacity Accelerator Network (CAN) operates in the US, India, Latin America, Africa, and Asia-Pacific to build the global data and AI workforce.
- In India, data.org works with over 100 cross-sector partners, focusing on the intersection of climate, health, energy, productivity, and livelihoods.
- The 'Climate Verse' initiative aims to unlock climate and energy data and enhance local digital and AI capacity.
- Barriers identified include fragmented data ecosystems, lack of standards, and a dearth of hyperlocal, accessible data, especially in emerging economies.
- data.org conducted 50+ consultations and reviewed 40+ data platforms/tools in India to map gaps and needs.
- Artha Global’s 2024 study surveyed 27,500 Indians (across 20+ states and 490+ assembly constituencies) to assess heat impacts on health and productivity.
- Findings: 45% of respondents reported heat-induced illness in the last month; 2/3 were sick for more than 5 days.
- 76% of Delhi’s population lives in high or very high heat-risk districts; nearly 50% of India’s workforce is employed outdoors.
- Heat adaptation is overwhelmingly private (e.g., air conditioner/cooler use), with over 40% of 'comfortable' residents relying on these, highlighting equity issues.
- Spatial analysis in Delhi showed a 1°C temperature reduction from increasing local green cover from 4% to 10%.
- A 3°C increase in experienced heat leads to a 50% rise in reported work loss, demonstrating substantial economic impacts.
- Calls for neighborhood-level heat action plans and integration of individual behavioral and built environment data for accurate, long-term grid planning.
- Upcoming sessions to explore India's open energy data architecture and related AI use cases, signaling further sectoral digital transformation.
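The heat-to-productivity figures above can be turned into a back-of-the-envelope calculation. This is an illustrative sketch only: it assumes a linear dose-response using the session's reported ratio (a 3°C rise in experienced heat corresponding to a 50% rise in reported work loss); the linear form and the baseline value are assumptions, not the Artha Global study's actual model.

```python
# Illustrative linear dose-response sketch based on the reported figures.
# Assumption: work loss scales linearly with experienced heat increase.

WORK_LOSS_RISE_PER_DEG_C = 0.50 / 3.0  # ~16.7% per deg C, per the 3C -> 50% figure

def projected_work_loss(baseline_days: float, heat_increase_c: float) -> float:
    """Scale a baseline work-loss figure by the assumed linear heat response."""
    return baseline_days * (1.0 + WORK_LOSS_RISE_PER_DEG_C * heat_increase_c)

if __name__ == "__main__":
    # E.g., a hypothetical 4 lost work-days/month at baseline, 3 deg C hotter.
    print(round(projected_work_loss(4.0, 3.0), 1))  # 6.0
```

Under these assumptions, even modest neighborhood-level temperature reductions (such as the 1°C gain from green cover cited above) translate directly into recovered working days, which is why granular heat data matters for planning.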
How AI Is Transforming India’s Workforce for Global Competitiveness
The panel discussion at the India AI Impact Summit 2026 focused on the profound workforce transformation being driven by artificial intelligence (AI) across sectors, with particular emphasis on IT, financial services, and comparative global perspectives from the UK. Panelists addressed anxieties and opportunities stemming from rapid AI adoption, noting that the most significant disruption has shifted from IT testing and infrastructure management toward core software engineering. They argued that companies must prioritize interdisciplinary fluency, system-level judgment, continuous learning, and deep contextual awareness in their workforce. The conversation highlighted the need for a whole-of-country approach to training and upskilling in India, drawing lessons from the UK’s proactive, government-led AI Skills Partnership, which aims to train over one million people for AI readiness. The discussion concluded with consensus that success requires iterative, flexible, and collaborative national frameworks combined with redesigned curricula that incorporate both technical and human skills to convert AI-induced anxiety into proactive agency.
- Software engineering now faces greater disruption from AI than IT testing or infrastructure management.
- AI's real workforce value lies in enabling new problem-solving capabilities, not just automating headcount reductions.
- Key required skills include system-level judgment, interdisciplinary fluency, continuous learning mindset, and contextual awareness.
- AI systems drive demand for integrated technical, governance, and people skills — not just coding ability.
- Mastercard emphasized the strategic need to blend engineering with risk, regulation, and user behavior understanding.
- UK government’s AI Skills Partnership aims to train over one million people for AI readiness through conversion and upskilling programs.
- UK's approach involves collaborative, flexible, and iterative frameworks between government, industry, and educational bodies.
- India currently lacks a coordinated, whole-of-government AI workforce strategy, with fragmented efforts across states and organizations.
- Both India and the UK recognize the need to reform school and college curricula to promote continuous, cross-disciplinary, and people-centric skills.
From Technical Safety to Societal Impact: Rethinking AI Governance
This session at the India AI Impact Summit 2026, co-chaired by Professor Virginia Dignum and Gina Matthews of the ACM Technology Policy Council, brought together experts to critically assess the current framing of AI safety. Rather than focusing solely on technical aspects like model alignment or robustness, the session championed a multidimensional approach rooted in governance, multidisciplinary input, and contextual understanding. Dr. Shiman, leading Mozambique's national AI strategy, emphasized the need to craft policy that prioritizes human, social, and institutional impacts, grounded in real-world contexts and supported by policies on data, cybersecurity, and infrastructure. Dame Wendy Hall challenged the lack of gender diversity and called for a new science of 'AI metrology', the systematic social and technical evaluation of AI’s impact, highlighting the risks of exclusion and the limitations of technical fixes alone. Professor Yanis Yanidis stressed separating the safety considerations of AI technology itself from its use, underscoring the critical need to regulate and measure AI’s deployment, especially where human and societal consequences are at stake. The session advocated for an inclusive, evidence-driven, and interdisciplinary approach to AI safety and governance, urging global cooperation and the elevation of marginalized voices in policy and leadership.
- Shifted focus from technical safety (alignment, robustness) to multidimensional, context-sensitive AI governance.
- Mozambique is developing a national AI strategy with UNESCO, emphasizing data policy, cybersecurity, and inclusive governance.
- Recent regulations in Mozambique govern data centers and cloud computing, tying infrastructure to national sovereignty and safety.
- AI policies must involve multidisciplinary stakeholders—including law, social sciences, ethics, education, and affected communities.
- Protection of vulnerable groups (women, children, youth) is highlighted due to disproportionate harms in AI deployment.
- Mozambique is piloting adoption of UNESCO’s AI ethics principles and expects to approve a comprehensive digital transformation strategy within the year.
- Dame Wendy Hall criticized the lack of gender diversity among summit speakers, asserting that true AI ethics require diverse, inclusive participation.
- Called for systematic, longitudinal monitoring and evidence collection on AI’s societal impact, referencing global experiments on youth and social media.
- Introduced the concept of 'AI metrology' or AI measurement—the social and technical science of evaluating AI effects—and announced an upcoming ACM journal dedicated to the field.
- Professor Yanidis distinguished between regulating AI technology and its use, advocating for strict evaluation and potential regulation of real-world deployments.
Democratizing AI: Building Trustworthy Systems for Everyone
The session at the India AI Impact Summit 2026 featured high-level discussions on global collaboration and the challenges of scaling AI adoption, especially in the Global South. Panelists from sectors including Microsoft's Office of Responsible AI and ML Commons highlighted the necessity of effective governance, robust infrastructure, skills development, culturally relevant AI, and open benchmarking. Microsoft announced a landmark $50 billion investment toward facilitating AI access in the Global South by 2030, focusing on five pillars: infrastructure, skilling (notably educating two million Indian teachers), multilingual and multicultural AI initiatives, local innovation support, and data transparency for policymakers. Emphasizing partnership, speakers discussed the complexity of accommodating diverse national regulations and the critical need for trust and reliability in AI systems—an effort advanced by open benchmarks and collaboration with organizations like ML Commons. The session underscored that partnership-driven, locally adapted, and trustworthy AI ecosystems are paramount for the equitable diffusion of AI benefits worldwide.
- The current summit is the largest to date, highlighting increased inclusivity and collaboration.
- Coordinating international AI efforts faces major challenges, notably in governance and managing the interdependence spanning hardware, software, protocols, ethics, and institutional capability.
- Microsoft announced it is on track to spend $50 billion to support AI infrastructure and access in the Global South by 2030.
- Microsoft committed to providing AI skills training for two million Indian teachers, aiming to boost AI literacy and workforce readiness.
- Key pillars for AI diffusion outlined by Microsoft: infrastructure investment, skilling, multilingual/multicultural AI, local innovation, and data transparency for policy enabling.
- Emphasis was placed on ensuring data centers and AI services are developed respecting national sovereignty, laws, and values.
- Microsoft, Google, and ML Commons are collaboratively advancing multilingual safety benchmarks and open-source evaluation metrics, expanding to languages like Hindi, Tamil, and others.
- ML Commons released new robust, defensible AI benchmarks, including joint work with Singapore's IMDA.
- Open benchmarking is critical to building trust and sovereignty by making AI systems reliable, measurable, and culturally sensitive.
- Global partnership across private sector, governments, and local communities is viewed as essential for both infrastructure funding and ecosystem adaptation.
- Regulatory diversity (e.g., differences from GDPR) presents ongoing challenges to building universally adaptable AI tools.
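The open-benchmarking idea above can be made concrete with a toy harness. This is a minimal sketch, not any real ML Commons benchmark: the prompt set, labels, and the stand-in `toy_model` are all hypothetical placeholders. The point it illustrates is that scoring per language makes gaps in specific languages (e.g., Hindi or Tamil coverage) visible rather than averaged away.

```python
# Minimal sketch of a multilingual safety-benchmark harness.
# All prompts, labels, and the toy model below are invented for illustration.
from typing import Callable, Dict, Tuple

# Each item: (language code, prompt, expected behavior).
# "refuse" marks prompts a safe model should decline to answer.
BENCH = [
    ("hi", "<unsafe prompt>", "refuse"),
    ("ta", "<benign prompt>", "answer"),
    ("en", "<unsafe prompt>", "refuse"),
]

def toy_model(prompt: str) -> str:
    # Stand-in system under test: refuses anything tagged unsafe.
    return "refuse" if "unsafe" in prompt else "answer"

def safety_pass_rate(model: Callable[[str], str]) -> Dict[str, float]:
    """Score per-language pass rates so language-specific gaps surface."""
    per_lang: Dict[str, Tuple[int, int]] = {}
    for lang, prompt, expected in BENCH:
        ok = model(prompt) == expected
        passed, total = per_lang.get(lang, (0, 0))
        per_lang[lang] = (passed + int(ok), total + 1)
    return {lang: p / t for lang, (p, t) in per_lang.items()}

if __name__ == "__main__":
    print(safety_pass_rate(toy_model))
```

Because both the prompt set and the scoring are open, any vendor or regulator can reproduce the numbers, which is the trust-building property the panel emphasized.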
Building India’s Digital and Industrial Future with AI
The session at the India AI Impact Summit 2026, featuring leaders from GSMA, Airtel (AEL), and Boron (Idea/Vodafone Idea), centered on the convergence of artificial intelligence, telecom infrastructure, and digital sovereignty within India's Digital Public Infrastructure (DPI). The keynote from GSMA emphasized India's leading role in deploying scalable, trusted digital systems that underpin identity, payments, and public services—showcasing the transformative impact of marrying AI and telecom networks. Panelists highlighted how mobile operators are transitioning from mere connectivity providers to intelligent, programmable trust layers essential for citizen-centric services, fraud mitigation, financial inclusion, and secure data transactions. Practical innovations, such as real-time risk indicators, anti-fraud telecom products, and interoperability initiatives, demonstrate India’s approach to balancing digital sovereignty with global connectivity standards. The session underscored the need to avoid infrastructure duplication and stressed cross-industry collaboration and open APIs. India's DPI blueprint is now not only revolutionizing its own governance and service delivery but is also being exported to other economies, illustrating its global significance.
- GSMA emphasized India's global leadership in digital public infrastructure, particularly with initiatives like Aadhaar and UPI.
- In January alone, India conducted transactions worth 28 lakh crore rupees through UPI, spanning over a billion people.
- Airtel (AEL) reported over 1 million BTS sites and 500 lakh kilometers of fiber powering India's connectivity backbone.
- Over 1,000 edge/hyperscale data centers support the national infrastructure, enabling real-time digital transactions.
- Aadhaar-enabled payment systems handle over 500 million rupees per month, with sub-2 millisecond transaction times.
- Telcos are deploying advanced AI-driven tools for fraud detection, spam prevention, and risk profiling for financial services.
- Recent anti-fraud telecom innovations include real-time spam warnings and friction-inducing alerts during suspicious calls.
- India’s DPI model is being exported to Africa, including bundled hardware, software, and private cloud for secure digital banking and identity.
- Collaboration among limited TSPs and regulators, with shared digital intelligence platforms and APIs, mitigates fraud and enriches transactional context.
- Avoiding parallel digital infrastructures and promoting open APIs (e.g., GSMA's open gateway) is key to preventing fragmentation and ensuring interoperability.
- The future of AI-integrated telecom envisions programmable, trusted networks as active contributors to national governance, resilience, and public trust.
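The "telco as a trust layer" pattern above can be sketched as a simple risk-indicator function of the kind a mobile operator might expose to banks over a shared API before a high-value transfer. Every signal name and threshold below is invented for illustration; real operator APIs and scoring are not public in this session.

```python
# Hypothetical sketch of a telco real-time risk indicator for payment rails.
# Signal names and thresholds are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class LineSignals:
    hours_since_sim_swap: float   # a recent SIM swap is a classic fraud precursor
    call_forwarding_active: bool  # forwarding can divert OTP-bearing calls
    spam_reports_30d: int         # crowd-sourced spam/scam reports on the line

def transaction_risk(sig: LineSignals) -> str:
    """Map raw network signals to a coarse risk band for the payment system."""
    score = 0
    if sig.hours_since_sim_swap < 72:
        score += 2
    if sig.call_forwarding_active:
        score += 2
    if sig.spam_reports_30d > 3:
        score += 1
    return "high" if score >= 3 else "medium" if score >= 1 else "low"

if __name__ == "__main__":
    print(transaction_risk(LineSignals(24.0, True, 0)))    # high
    print(transaction_risk(LineSignals(500.0, False, 0)))  # low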
AI That Empowers: Safety, Growth, and Social Inclusion in Action
This session at the India AI Impact Summit 2026 brought together key stakeholders, including representatives from UNESCO, the UN, Estonia, and NASSCOM, to discuss the significance of global standards, human rights frameworks, and capacity-building initiatives in fostering responsible and inclusive AI governance. Core themes included the importance of embedding ethics and safeguards in AI by design, the role of multi-stakeholder global dialogue in shaping interoperable governance, practical efforts to close capacity gaps (especially in developing economies), and recognition of the differentiated needs and challenges faced by industry segments, from tech giants to startups. Announcements highlighted UNESCO's launch of readiness assessments in over 80 countries (including India), a new global MOOC on AI ethics with LG AI Research on Coursera, and the initiation of the UN Global Dialogue on AI Governance, co-chaired by Estonia and El Salvador. NASSCOM's mission to support India’s tech ecosystem with open assets and capacity-building for responsible AI was also spotlighted.
- UNESCO has launched Readiness Assessment Methodology (RAM) reports in over 80 countries to evaluate and guide responsible AI adoption; India's readiness assessment was completed two days prior to the summit.
- UNESCO and LG AI Research are developing a global Massive Open Online Course (MOOC) on the ethics of artificial intelligence to be delivered on Coursera, focusing on practical tools and 'ethics by design.'
- The UN Global Dialogue on AI Governance, initiated by a General Assembly resolution in August 2025, will convene its first session in July, co-chaired by Estonia and El Salvador.
- Four priority areas for the global dialogue were articulated: ensuring safety/trustworthiness in AI, closing capacity gaps (infrastructure, skills, compute), fostering interoperable cross-border governance, and anchoring AI in human rights and international law.
- Human rights due diligence was emphasized as a pragmatic process for embedding responsible business practices in AI development and deployment.
- NASSCOM’s Responsible AI mission, launched in 2021, focuses on building open assets, capacity, and adoption of responsible AI governance across India's tech ecosystem, emphasizing differentiated support for large companies and startups.
- International collaborations (with academia such as Oxford and the Alan Turing Institute) are integral to infusing diverse perspectives into capacity-building and governance efforts.
From Policy to Practice: Building National AI Capabilities
Responsible AI for Children: Safe, Playful, and Empowering Learning
The session at the India AI Impact Summit 2026 featured leaders from LEGO Education discussing the critical importance of AI literacy for children, the launch of LEGO's new computer science and AI educational product, and the guiding principles ensuring these tools are safe, ethical, inclusive, and hands-on. Speakers emphasized that AI should not be a black box for students but rather a system they can understand, deconstruct, and ultimately build upon. LEGO’s approach centers on equipping all children—not just a technological elite—with foundational AI and computer science concepts, promoting collaborative, active, and ethical learning. The new product, freshly announced and rolling out to schools in April, incorporates activities that teach AI principles such as probabilistic reasoning, model bias, and ethical data use through playful, group-based experiences. In addition to detailed demos and classroom workflows—including a hands-on 'Strike a Pose' lesson—the session underlined the importance of supporting educators with resources and training, recognizing most teachers are not originally specialists in this field. Finally, LEGO advocates prioritizing creativity and imagination in AI education, seeing childhood as a crucial developmental phase rather than merely a market segment, and positioning play as an enabler for a more inclusive, empowering AI future.
- LEGO Education unveiled a new computer science and AI education product to be launched in schools in April 2026.
- The product was first announced in January 2026 and is designed to teach AI and computer science fundamentals through collaborative, hands-on learning.
- AI literacy is defined as a modern essential, to be placed on par with mathematics, reading, and problem solving—not an elective for the few.
- Strong emphasis on guiding principles: child safety, student agency, universal design (inclusivity for neurodiversity and different learning needs), transparency around data provenance, and privacy (all AI runs locally; no data leaves devices or is sent to third parties).
- Live demonstration of a classroom lesson ('Strike a Pose') showcased teaching kids to train custom AI classifiers using real data, highlighting core AI concepts like probabilistic reasoning, model bias, and ethical use.
- The ecosystem is built to support non-specialist teachers with turnkey lesson plans and a teacher portal, aiming to democratize and scale up access to AI literacy.
- LEGO positions play and creativity as central to AI learning, arguing that these elements enable a more inclusive and empowering technological future for children.
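The core idea behind a lesson like 'Strike a Pose' (train a small classifier on labelled examples, then read out class probabilities rather than a single hard answer) can be sketched in a few lines. This is a stdlib-only illustration under invented data; LEGO's actual on-device models and lesson internals are not described in the session.

```python
# Sketch of the classroom idea: nearest-centroid pose classifier with
# probabilistic output. Features, labels, and data are invented.
import math
from collections import defaultdict

def train_centroids(samples):
    """samples: list of (label, (x, y)) pose features -> per-class centroid."""
    sums = defaultdict(lambda: [0.0, 0.0, 0])
    for label, (x, y) in samples:
        s = sums[label]
        s[0] += x; s[1] += y; s[2] += 1
    return {lbl: (sx / n, sy / n) for lbl, (sx, sy, n) in sums.items()}

def predict_proba(centroids, point):
    """Softmax over negative distances: closer centroid -> higher probability."""
    scores = {lbl: -math.dist(point, c) for lbl, c in centroids.items()}
    z = sum(math.exp(s) for s in scores.values())
    return {lbl: math.exp(s) / z for lbl, s in scores.items()}

if __name__ == "__main__":
    data = [("arms_up", (0.0, 1.0)), ("arms_up", (0.1, 0.9)),
            ("arms_down", (0.0, -1.0)), ("arms_down", (-0.1, -0.9))]
    probs = predict_proba(train_centroids(data), (0.0, 0.8))
    print(max(probs, key=probs.get))  # arms_up
```

A skewed training set (say, many 'arms_up' examples from only one body type) would visibly shift the centroids and the probabilities, which is exactly how a lesson can make model bias tangible to students.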
AI for Social Good: Using Technology to Create Real-World Impact
The session at the India AI Impact Summit 2026 highlighted India's leadership in utilizing AI for population-scale transformation across education, healthcare, and agriculture, emphasizing the necessity of open digital public infrastructure and interoperability. Senior leaders from Google, the Gates Foundation, World Bank, and pioneering Indian organizations discussed their joint efforts—from language digitization projects to AI-powered agricultural networks—demonstrating tangible social benefits and global scalability. Key initiatives named include Google’s multilingual AI collaborations (Bhashini, Project Vani), Gemini-powered agriculture pilots in Uttar Pradesh, and a commitment to open, decentralized AI networks. The panel advocated for developing global standards, risk-profiling in healthcare, and the expansion of India's open network blueprints to other countries, all while ensuring digital inclusion and local language access. Google reaffirmed its commitment with continued partnerships and funding, underlining the summit’s focus on responsible, inclusive, and globally coordinated AI innovation.
- AI has already demonstrated population-scale impact in India, particularly through education, healthcare, and agriculture.
- Google’s AlphaFold protein database, used in 190+ countries, has significant adoption in India, the 4th largest user globally.
- Project Vani (Google & Indian Institute of Science) completed its second phase, releasing speech data for over 100 Indic languages (20 digitally recorded for the first time) via the Bhashini mission.
- Google and the World Bank are scaling India-born open network blueprints globally (Brazil, Nigeria, Ethiopia, Kenya), starting with a Gemini-powered agricultural network in Uttar Pradesh offering multilingual AI agents for farmers.
- $10 million Google.org grant established the Network for Humanity Foundation to develop open, interoperable digital infrastructure and innovation labs from Singapore to Switzerland.
- Google.org supports Wadhwani AI to embed intelligence into India’s digital infrastructure, aiming to empower 1.44 million frontline health workers and provide AI-driven malnutrition early warnings.
- AI-enabled education initiatives have reached 10 million Indian students/educators, targeting 75 million students and nearly 2 million educators by 2027.
- India’s open network approach (leveraging UPI, Bhashini, Beckn) is proposed as a reference model for digital and AI public infrastructure globally.
- The importance of multilingual, agent-based AI and the removal of language barriers for true mass inclusion was emphasized.
- The World Bank’s AgriConnect initiative in Uttar Pradesh uses open standards and networks to deliver consistent, user-centric services, with plans to expand the model to healthcare and education.
- Healthcare panelists called for building health data stacks (phenotypic, genomic, demographic, treatment, etc.) using India’s consent-based, open-source models to enable AI-powered universal healthcare and risk profiling.
- Speakers stressed the importance of open, decentralized, interoperable AI infrastructure to close the digital divide and drive global digital inclusion.
Multistakeholder Partnerships for Thriving AI Ecosystems
The session, featuring representatives from the UN Development Program, the German government, Salesforce South Asia, Wadhwani AI Global, and Tata Consultancy Services, focused on the dual potential for AI to drive positive change in sustainable development while simultaneously risking the exacerbation of global inequities. Panelists highlighted the need for responsible, inclusive, and multi-stakeholder-driven AI ecosystems, supported by robust frameworks, governance, and open access. Discussions touched upon recent initiatives such as the Hamburg Declaration on AI for Sustainable Development and demonstrated through real-world examples how AI and digital technologies have already facilitated financial inclusion at scale in India. There was consensus that governments play a crucial role in closing the 'power gap' (not merely an innovation gap) by investing in equitable data resources, open-source solutions, and widespread upskilling. The private sector, meanwhile, underscored democratizing technology, skill-building, and ethical standards as essential to AI’s societal benefits. Both sectors emphasized that the adoption of technology in India is not the challenge; rather, purposeful design, infrastructure, policy safeguards, and collaboration are key to ensuring AI benefits all, not just a privileged few.
- AI holds transformative promise for sustainable development but current adoption risks deepening inequality if not managed responsibly.
- The Hamburg Declaration on AI for Sustainable Development is advancing a global, multistakeholder commitment to responsible AI.
- Only 17% of global venture capital reaches populations representing over 90% of humanity; data infrastructure is even more skewed, with the global south having just 0.1% of global data center capacity.
- Wadhwani AI has deployed 40 AI solutions impacting over 150 million people, providing examples in healthcare, agriculture, and education.
- Salesforce's 'Trailblazer' skilling initiative in India has created a community of 3.9 million, the world's second largest after the USA.
- Democratized technology and infrastructure, open-source, and policy safeguards—including privacy and ethical considerations—are necessary to ensure broad-based access and trust.
- Policy intervention, inclusive skilling, and multi-sector partnerships are seen as pivotal to close power and access gaps, especially for small and medium enterprises and underserved communities.
- India's digital transformation (e.g., direct benefit transfer, UPI) underscores how technology can enable inclusion, but framework and governance are essential for sustainable AI adoption.
Agentic AI in Focus: Opportunities, Risks, and Governance
The panel discussion at the India AI Impact Summit 2026 focused on the evolving business applications and policy considerations of Agentic AI, a class of artificial intelligence systems capable of autonomous action. The session began with Austin Mayron, Acting Director of the Center for AI Standards and Innovation (CAISI), outlining a key US policy shift: a move from AI safety to fostering standards and innovation under the Department of Commerce and NIST. CAISI has launched initiatives such as an AI agent standards program and sector-specific listening sessions to understand industry needs, particularly around security, verification, and adoption barriers in sectors like healthcare, education, and finance. Business leaders from Synopsys, Mastercard, and NetApp then shared practical Agentic AI uses. At Synopsys, 'agentic engineers' are automating complex chip and system design, augmenting rather than replacing human engineers to accelerate innovation and manage escalating complexity. Mastercard leverages Agentic AI for real-time fraud detection and secure payment processing, emphasizing operational autonomy with robust human oversight and value constraints. NetApp is deploying Agentic AI to enhance data quality and readiness for AI at the storage layer, addressing security and the need for real-time risk management. Panelists collectively highlighted the shift from assistive AI to operational, autonomous multi-agent systems, with ongoing challenges in security, oversight, and policy adaptation.
- The Center for AI Standards and Innovation (CAISI), formerly the US AI Safety Institute, now emphasizes setting standards and fostering innovation within the Department of Commerce.
- CAISI, in collaboration with NIST, is launching an AI Agent Standards Initiative, including an RFI on agent security and a public comment period on AI identity and verification.
- Sector-specific listening sessions targeting healthcare, education, and finance will identify industry barriers to Agentic AI adoption.
- Synopsys is implementing 'agentic engineers', AI agents that complement human engineers, in electronic design automation, accelerating product innovation and managing complexity in chip and system design.
- Mastercard applies Agentic AI for real-time fraud detection and transaction security, with strict operational boundaries and mandatory human oversight.
- NetApp leverages Agentic AI agents at the storage controller level for improving data quality, preparing data for AI use, and bolstering cybersecurity.
- Panelists emphasized moving from assistive to operational AI, with multi-agent ecosystems introducing new opportunities and security challenges.
- Security, trust, and regulatory oversight were flagged as ongoing priorities as enterprise adoption scales.
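The "operational autonomy with human oversight and value constraints" pattern described by the payments panelists can be sketched as a routing rule: the agent acts alone only inside strict bands, and everything ambiguous or high-value goes to a human. The thresholds below are illustrative assumptions, not Mastercard's actual policy.

```python
# Sketch of bounded autonomy for an agentic fraud-screening step.
# All thresholds are invented for illustration.

AUTO_BLOCK_SCORE = 0.95   # agent may block on its own above this risk score
AUTO_ALLOW_SCORE = 0.20   # and approve on its own below this one
MAX_AUTO_VALUE = 10_000   # never act autonomously above this amount

def route(risk_score: float, amount: float) -> str:
    """Decide whether the agent acts autonomously or defers to a human."""
    if amount > MAX_AUTO_VALUE:
        return "escalate_to_human"   # value constraint: always a human
    if risk_score >= AUTO_BLOCK_SCORE:
        return "auto_block"
    if risk_score <= AUTO_ALLOW_SCORE:
        return "auto_allow"
    return "escalate_to_human"       # the ambiguous middle band

if __name__ == "__main__":
    print(route(0.99, 500))     # auto_block
    print(route(0.05, 500))     # auto_allow
    print(route(0.99, 50_000))  # escalate_to_human
```

The design choice here is that autonomy is the exception, carved out explicitly, rather than the default with oversight bolted on, which matches the panel's framing of operational boundaries.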
Why Science Matters in Global AI Governance
The session at the India AI Impact Summit 2026 opened with a keynote from the Secretary General of the United Nations, Antonio Guterres, emphasizing the critical need for evidence-based, science-driven, and internationally coordinated AI governance. Guterres announced the formation of an independent, globally representative scientific panel on Artificial Intelligence, comprising 40 confirmed experts, to provide a trusted, shared analytic baseline for policy decisions. The panel aims to deliver its first report ahead of a global dialogue on AI governance in July. Renowned AI researcher Professor Yoshua Bengio, recently appointed to this panel, stressed the challenges posed by AI's rapid development, scientific uncertainty, and the need for inclusive, neutral, and multidisciplinary scientific synthesis. He advocated for adaptive high-level principles and technologically embedded guardrails to anticipate and address both acute and ambiguous risks. Microsoft's Brad Smith underscored the enduring value of multilateral institutions like the United Nations in navigating technological transformation, drawing historical parallels to the UN's foundational role in maintaining global stability and the necessity of reinvesting in such institutions for the AI era. Collectively, the session highlighted the urgency of harmonizing global standards and keeping human agency central in AI governance to avoid fragmentation, ensure safety, and maximize societal benefit.
- UN Secretary General Antonio Guterres announced the launch of an independent international scientific panel on AI, confirmed by the General Assembly with 40 global experts.
- The panel's mandate: to close AI knowledge gaps, measure real AI impacts, and provide a shared, evidence-based baseline for international policy.
- First report from the panel is expected ahead of a global dialogue on AI governance in July 2026.
- The panel is fully independent, globally diverse, and multidisciplinary to address AI's impact across all sectors and societies.
- Emphasis on science-based, rather than hype-driven, governance to prevent policy fragmentation, reduce risks, and enhance interoperability (e.g., enabling startups in New Delhi to scale globally to shared standards).
- Professor Yoshua Bengio highlighted the difficulty of AI governance due to scientific disagreement, rapid technological pace, and uneven AI capabilities.
- Bengio stressed the need for high-level policy principles, adaptable technological guardrails, and iterative feedback between scientists and policymakers.
- Concern for impacts on the Global South and the importance of inclusive, equitable international forums.
- Microsoft's Brad Smith reinforced the need to reinvest in the United Nations as a central institution to address global technology challenges, drawing parallels with humanity’s ability to manage existential risks such as nuclear weapons.
- Across speakers, there was consensus that human oversight, transparency, and accountability in AI decision-making must remain paramount.
Global Enterprises Show How to Scale Responsible AI
The session brought together leading AI minds from Infosys, IBM, Nvidia, and Meta to discuss the evolving landscape of trustworthy and responsible AI, reflecting on the industry's shifting attitudes towards trust, governance, and security as AI adoption scales. Panelists highlighted the rapid change from viewing security as an afterthought to placing it at the forefront, the complexities introduced by scaling AI to billions of users—including both technical and governance-related failures—and the dangers of anthropomorphizing AI. They offered multi-dimensional frameworks for defining trustworthy AI, emphasizing end-user confidence, security, compliance, functional safety, and cybersecurity. The discussion underscored the divergent perspectives of industry, regulators, academia, and policymakers, while converging on the urgent need to build trust both at scale and at the system level to unlock the next wave of AI innovation.
- Infosys, IBM, Nvidia, and Meta leaders provided perspectives on scaling responsible, trustworthy AI.
- Trust, governance, and security have shifted from afterthoughts to central priorities in AI adoption over the last 24 months.
- Organizations initially managed AI governance via basic tools (e.g., Excel), but this has proven unsustainable for scaling.
- Scaling AI to billions of users exposes weaknesses not in infrastructure itself, but in the underlying systems, security controls, and governance processes.
- Meta's perspective cautioned against the anthropomorphization of AI, urging a focus on technical and ontological realities instead of humanizing AI agents.
- Different stakeholders (regulators, industry, academia, policy makers) have distinct, sometimes conflicting definitions and requirements for trustworthy AI.
- Key non-negotiables for trustworthy AI: user confidence, passing security tests, risk monitoring (limiting hallucinations), and legal/regulatory compliance.
- Nvidia contributed a tripartite framework for trustworthy AI: functional safety, AI safety (including bias testing and validation), and cybersecurity.
- Panelists agreed that trust is foundational for AI to reach its full potential and scale responsibly.
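One way the panel's "non-negotiables" become operational at scale is as a release gate: a model version ships only if every trust metric clears its threshold. This is a minimal sketch; the metric names and thresholds are invented, not any panelist company's actual governance tooling.

```python
# Sketch of a trustworthy-AI release gate over invented metrics.

THRESHOLDS = {
    "security_tests_pass_rate": 1.00,   # all red-team/security tests must pass
    "hallucination_rate_max": 0.02,     # risk monitoring: cap hallucinations
    "compliance_checks_pass_rate": 1.00,  # legal/regulatory compliance
}

def release_gate(metrics: dict) -> bool:
    """Ship only when every non-negotiable clears its threshold."""
    if metrics["security_tests_pass_rate"] < THRESHOLDS["security_tests_pass_rate"]:
        return False
    if metrics["hallucination_rate"] > THRESHOLDS["hallucination_rate_max"]:
        return False
    if metrics["compliance_checks_pass_rate"] < THRESHOLDS["compliance_checks_pass_rate"]:
        return False
    return True

if __name__ == "__main__":
    print(release_gate({"security_tests_pass_rate": 1.0,
                        "hallucination_rate": 0.01,
                        "compliance_checks_pass_rate": 1.0}))  # True
```

Encoding the gate in code (rather than a spreadsheet, as the panel noted early adopters did) is what makes the process auditable and repeatable at billion-user scale.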
National Disaster Management Authority
This panel session at the India AI Impact Summit 2026 explored the integration of artificial intelligence (AI) within national and global disaster risk reduction (DRR) and resilience systems. The discussion emphasized the need for institutional, policy, and technological reforms to embed AI in disaster governance beyond pilot-scale innovations, focusing on both physical and virtual (cybersecurity) risks. Key speakers, including leaders from Mauritius, the UK Met Office, Hewlett Packard Enterprise India, Vassar Labs, and Google Cloud India, covered topics such as digital twins, human-in-the-loop decision processes, data interoperability, sovereign data architectures, infrastructure scaling, co-development of AI models, trustworthy and explainable AI, and the critical role public-private partnerships play in operationalizing AI for disaster resilience. The panel underscored the challenges faced by resource-limited countries, the imperative of blending machine learning with traditional models, and the importance of standards, transparency, and scalable computing infrastructure to enable timely, trusted, and actionable disaster response.
- India proposes institutionalizing AI within national resilience frameworks, integrating meteorology, space tech, and digital platforms with community-centered dissemination.
- Mauritius outlined a policy focus on bridging physical and virtual disaster resilience, emphasizing digital twins, cybersecurity, and maintaining human oversight in AI-driven systems.
- Deployment of cell broadcast early warning systems in small island states like Mauritius includes safeguards to prevent 100% automation in issuing life-critical alerts.
- UK Met Office is developing hybrid physics-based and machine learning weather models, advocating step-by-step blending, robust benchmarking, and co-development partnerships, especially for low-resource settings.
- Private sector representative (HPE India) highlighted India's deficit in supercomputing infrastructure: only 40 petaflops of capacity to support AI-powered real-time alert systems, compared with 1–2 exaflop systems in the US.
- Emphasis on designing interoperable AI systems that respect sovereign data architectures and are compatible with diverse governance structures, crucial for countries with federal/state divisions like India.
- Standards for explainability and transparency in AI-powered decisions are critical, especially for life-saving applications, with advocacy for humans remaining involved in the decision loop.
- Public, private, and multilateral partnerships are recognized as essential for scaling AI-driven resilience solutions nationally and globally.
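The safeguard described above, where cell-broadcast alerts are deliberately never 100% automated, can be pictured as a simple approval gate. The sketch below is purely illustrative (the `AlertProposal` type, the `issue_alert` function, and the 0.5 severity threshold are all hypothetical names, not part of any deployed system): the model may propose and filter, but a human makes the final call on anything life-critical.

```python
from dataclasses import dataclass

@dataclass
class AlertProposal:
    hazard: str
    severity: float   # model-estimated severity, 0.0-1.0
    region: str

def issue_alert(proposal: AlertProposal, human_approve) -> bool:
    """Gate an AI-proposed alert behind a human decision.

    Low-severity proposals are dropped automatically; anything that
    could trigger a life-critical broadcast requires explicit human
    approval, so issuance is never fully automated.
    """
    if proposal.severity < 0.5:            # illustrative threshold
        return False                       # not broadcast-worthy
    return bool(human_approve(proposal))   # human makes the final call

# Usage: a duty-officer callback stands in for the human approval step.
approved = issue_alert(
    AlertProposal("cyclone", 0.9, "Port Louis"),
    human_approve=lambda p: True,          # officer approves
)
```

The design choice is that automation narrows the set of decisions reaching the human rather than replacing the human, which matches the "human-in-the-loop" framing used throughout the panel.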
HETEROGENEOUS COMPUTE FOR DEMOCRATIZING ACCESS TO AI
The session at the India AI Impact Summit 2026 focused on the transformative potential and practical challenges of deploying AI at scale across India's diverse ecosystem. Panelists highlighted the move towards voice-driven interfaces in native languages to enhance accessibility, the necessity of heterogeneous compute architectures catering to varied connectivity and power realities, and the rise of edge inferencing for improved resilience and reduced data center reliance. Key bottlenecks identified included infrastructure limitations (notably power and compute availability), security and trust challenges (including model vulnerabilities and adversarial threats), and gaps in accessible high-quality data, especially for enterprise and government use cases. Solutions discussed revolved around fit-for-purpose infrastructure deployment, robust policy for model governance, domain-specific vertical applications, and hybrid energy strategies critical for India’s context. The session emphasized collaborative approaches among technology providers, policymakers, and end users, with a focus on sovereign AI models, user-centric security, and sustainable, scalable infrastructure.
- Deployment of AI interfaces in 14 native Indian languages, prioritizing natural voice interaction to broaden access.
- Advancements in heterogeneous compute: ability to run up to 10-billion-parameter multimodal models on smartphones, and sub-1-billion-parameter models on wearable devices.
- Edge inferencing is emphasized to ensure AI experience is unaffected by variable network connectivity, reducing data center load.
- India is home to nearly 300 GenAI startups focused on application-layer innovation, positioning the country as a leader in generative AI applications.
- Power and compute infrastructure are major bottlenecks; global data center power demand expected to reach 63 GW in coming years.
- Cisco and ecosystem partners focus on tailored 'fit-for-purpose' AI infrastructure, with security and data quality as critical priorities.
- Security concerns cover model integrity (adversarial attacks, hallucinations), need for visibility across the stack, and protecting user data.
- Data gap persists: enterprises and government own superior datasets that need to be leveraged to train sovereign and domain-specific models.
- Recommendation for regulated, context- and domain-specific models in sensitive sectors such as education, with governance akin to content censorship.
- Critical infrastructure must adopt dynamic, resilient architectures (including edge and hybrid compute) to maintain national security and service continuity.
- Total Cost of Ownership (TCO), power usage efficiency (PUE), and sustainability highlighted; India's physical constraints (land, water, power) necessitate hybrid energy and off-grid approaches.
- Trust in AI systems is context-dependent and complex, requiring mathematical rigor and robust policy frameworks.
- AI deployment must balance centralized (data center/cloud) and distributed (edge/device) approaches for security, reach, and sustainability.
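The PUE metric cited above has a simple standard definition: total facility energy divided by the energy consumed by IT equipment alone, so 1.0 is the theoretical ideal and cooling or power-conversion overhead pushes real values higher. A minimal sketch of the calculation (the function name and the example figures are illustrative):

```python
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power Usage Effectiveness: total facility energy / IT energy.

    1.0 means every watt goes to compute; cooling, lighting, and
    power-conversion losses raise the ratio above 1.0.
    """
    if it_equipment_kwh <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kwh / it_equipment_kwh

# A facility drawing 1400 kWh while its servers consume 1000 kWh:
print(round(pue(1400, 1000), 2))  # -> 1.4
```

In India's context of land, water, and power constraints, driving this ratio down is one of the levers (alongside hybrid energy and edge offload) for the TCO and sustainability goals the panel raised.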
Smart Regulation: Rightsizing Governance for the AI Revolution
The session addressed the pressing issue of AI governance in a fragmented global landscape, advocating for pragmatic coalition-building over unattainable multilateral consensus. Panelists emphasized the widening 'AI divide' between powerful nations and developing or smaller countries, highlighting critical issues such as limited access to compute infrastructure, data silos, and unequal opportunities to participate in AI innovation. The conversation underscored the necessity of collaborative models—particularly around shared resources, data pooling, and open ecosystems—to mitigate these disparities and promote more equitable AI adoption, especially in sectors like healthcare, education, and climate resilience. While pessimistic about the current state of international institutions, speakers recognized the potential of strategic collaborations, sovereignty-focused governance, and the adoption of trusted, interoperable frameworks to bridge the AI capacity gap worldwide.
- Global consensus on AI governance is currently unrealistic due to geopolitical competition, especially between the US and China.
- Partial, issue-specific alignments and pragmatic coalitions offer the best chance for meaningful progress.
- International institutions are in decline, making coalition-building around trusted mechanisms increasingly important.
- A significant 'AI divide' risks outpacing the earlier 'digital divide', exacerbating inequality between countries with and without access to advanced AI resources.
- India is positioned relatively well, but smaller and developing nations face acute challenges in accessing sufficient compute power, quality datasets, and advanced infrastructure.
- Data silos—especially in developing countries—limit both representativeness and the quality of AI systems trained for local needs.
- Foundational infrastructure concerns, such as reliable and clean power and connectivity, are often overlooked but critically shape AI readiness.
- Skills gaps exist in both AI users and AI builders; addressing them is necessary for broader participation.
- Pooling resources, shared compute/data initiatives, and collaborative frameworks (including open source) were highlighted as practical solutions.
- Governance alignment driven by sovereignty and strategic autonomy is emerging as a key motivator for cooperation among resource-constrained countries.
Digital Democracy: Leveraging the Bhashini Stack in the Parliament of India
The session at the India AI Impact Summit 2026 focused on the formal launch of a comprehensive policy report and developer toolkit aimed at building an open and responsible voice technology ecosystem in India. This initiative, a result of a productive Indo-German partnership, emphasizes digital inclusion by leveraging open voice AI for public services, particularly benefiting linguistically and culturally diverse, underserved communities. The report advocates a proactive government role—beyond regulation—to act as a steward and convenor for digital public goods. It outlines a four-pillar policy framework: treating foundational data sets as public goods, institutionalizing sustainable open-source infrastructure, building open and representative models, and strengthening responsible deployment. The accompanying developer toolkit translates these principles into actionable guidance, promoting diversity, data quality, and embedding responsible AI practices throughout the voice technology life cycle. Announcements highlighted pioneering open voice models for nine Indian languages, collaborative engagement with various Indian and German stakeholders, best practice sharing, and the alignment with international responsible AI commitments like the Hamburg Declaration.
- Formal launch of a policy report and developer toolkit for open and responsible voice technology in India.
- The initiative is a product of Indo-German collaboration involving Bhashini, Digital Futures Lab, Art Park, Trillegal, and NASSCOM.
- Nine open voice AI models have been created for Indian languages through the partnership’s Fair Forward initiative.
- Voice technology is highlighted as a key enabler for digital inclusion for populations with low literacy or limited digital access.
- Policy framework calls for foundational data sets as public goods, sustainable open-source infrastructure, open and representative models, and responsible deployment.
- Government roles are redefined from regulator to steward, ecosystem convenor, and standard setter—beyond traditional frameworks.
- Developer toolkit recommends best practices for diversity, data quality, context-relevant benchmarks, multi-source data strategies, and continuous post-deployment monitoring.
- Report emphasizes community engagement, consent protocols, and privacy-enhancing techniques for responsible AI development.
- Alignment with the Hamburg Declaration on responsible AI and SDGs, endorsed by over 50 stakeholders worldwide.
- Sustained innovation is encouraged through continuous data collection, active creation of digital public goods, and iterative improvement cycles.
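The toolkit's call for continuous post-deployment monitoring can be sketched as a rolling-window check on a quality metric, such as a voice model's word-error rate on sampled production traffic. The class name, window size, and 0.25 alert threshold below are all illustrative assumptions, not part of the toolkit itself:

```python
from collections import deque

class RollingMonitor:
    """Track a rolling error rate (e.g., ASR word-error rate) over the
    last N production samples and flag drift past a threshold."""

    def __init__(self, window: int = 100, threshold: float = 0.25):
        self.errors = deque(maxlen=window)   # old samples drop off
        self.threshold = threshold

    def record(self, error_rate: float) -> bool:
        """Record one sample's error rate; return True if the rolling
        average now breaches the alert threshold."""
        self.errors.append(error_rate)
        mean = sum(self.errors) / len(self.errors)
        return mean > self.threshold

# Usage: three samples through a small window.
mon = RollingMonitor(window=3, threshold=0.25)
print(mon.record(0.10))  # False
print(mon.record(0.20))  # False: rolling mean 0.15
print(mon.record(0.60))  # True:  rolling mean 0.30
```

Feeding such a monitor with per-language samples would also surface the diversity gaps the report flags, since degradation often appears first in underrepresented dialects.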
India’s Roadmap to an AGI-Enabled Future
The session at the India AI Impact Summit 2026, anchored by Chariot co-founder Subraat, addressed India's ambitious quest to build sovereign frontier AI models and the critical infrastructure required to achieve this vision. Key leaders from the energy, compute, and research sectors—Sri G. Prasad G. (Central Electricity Authority), Mr. Tarun Dwa (E2E Networks), and Professor Jay Deva (IIT Delhi)—outlined the formidable challenges and significant advances in scaling up national power, cloud computing hardware, and local talent. With data centers rapidly expanding from tens to thousands of megawatts and India's renewable energy capacity surpassing 250 GW, the nation is gearing up to meet the massive, variable power demands of the AI era. Efforts are underway to enable multi-source, green-powered, distributed data center design, and rapid expansion of transmission and storage infrastructure. On the compute front, local cloud hyperscalers like E2E Networks are democratizing access to world-class GPU infrastructure, reducing costs, and supporting over 10,000 innovators. Professor Deva emphasized the urgent need for talent and research breakthroughs to maximize the value of these investments. Collectively, the panel reinforced that India is rapidly evolving from a service destination to a creator and global provider of AI infrastructure and innovation.
- Chariot is among Indian companies mandated to build frontier AI models as part of the country's sovereign AI mission.
- Data center capacity in India is scaling from 10-50 MW to gigawatt (1000+ MW) sizes in major cities, with at least 16 GW visibility nationwide.
- New reliability standards (n+1+1) demand four-layered redundancy for power supply to AI-driven data centers.
- India aims to supply AI data centers with green power, leveraging a mix of solar, wind, battery storage, and hydro power.
- India surpassed 250 GW total renewable energy capacity and added 40 GW in just 10 months (April-Jan), targeting 50 GW new capacity this year.
- Plans for 100 GW hydro-pumped storage and a roadmap for 100 GW nuclear by 2047 (22 GW by 2032/34) to ensure 24/7 green energy supply.
- Transmission lines for data centers can be built in 24-36 months, dramatically faster than global averages (5-10 years elsewhere).
- India is expanding its grid connectivity regionally (Nepal, Bhutan, Bangladesh, plans for UAE, Saudi Arabia, Singapore, Sri Lanka), potentially turning India into a global data center hub.
- E2E Networks, now India's first AI-focused hyperscaler, enables affordable enterprise-level AI compute for 10,000+ innovators with advanced GPU infrastructure.
- Panel consensus: To truly lead, India must invest in indigenous talent, research, and ecosystem development, not just import hardware and models.
AI 2.0: The Future of Learning in India
The session at the India AI Impact Summit 2026 centered on the launch of the latest CPRG (Center for Policy Research and Governance) "AI in School Education" report, following last year's landmark study on AI in higher education. CPRG's work, noted for its comprehensive mapping of AI adoption in various societal sectors, sparked national debate and dialogue on AI's evolving role in education, jobs, and future skills in India. The report detailed findings from a Delhi-based survey of private school students, revealing that nearly 50% use AI tools multiple times weekly, predominantly for academic research and assistance. However, students still overwhelmingly prefer traditional human learning and YouTube or ICT platforms over AI tutors, citing issues such as AI hallucinations and lack of adaptive or personalized learning. Distinguished panelists contextualized these findings by emphasizing the need for educational institutions to adapt rapidly to paradigm shifts triggered by AI, drawing parallels to the IT revolution and highlighting the global race in AI research output. The key takeaway was a call for India’s education and governance systems to prioritize ethical, inclusive, and creative integration of AI, ensuring that technology augments human potential rather than replaces it.
- CPRG launched the 'AI in School Education' report, building on its previous study on AI in higher education.
- Survey in Delhi private schools found ~50% of students regularly use AI tools for academic purposes.
- Generative AI platforms are popular, but students still prefer human interaction and YouTube/ICT-based learning.
- Key barriers to AI adoption: prevalent AI hallucinations (incorrect information), lower accuracy in calculations/logical tasks, and lack of adaptive/personalized responses.
- Perceived helpfulness of AI is high for both school and entrance exam prep, but AI tools are seen as supplementary rather than substitutive.
- CPRG announced upcoming reports on AI's impact on jobs and future skills (e.g., 'future of job fear' report next year).
- Panelists stressed that institutions must adapt or risk obsolescence; reference to the rapid transformation post-COVID and the global AI research race led by China and the US.
- Emphasis on the need for AI to expand human creativity, not diminish it, with policy dialogue and cross-sector collaboration as a guiding focus.
Driving India’s AI Future: Growth, Innovation, and Impact
The session at the India AI Impact Summit 2026 focused on unveiling Dell Technologies' comprehensive AI blueprint aimed at accelerating India's journey toward AI leadership. Dr. Vivek Mohindra emphasized the tripartite strategy of 'Invest, Innovate, and Evolve,' highlighting the need for massive compute infrastructure, inclusive access for MSMEs and startups, workforce upskilling, and adaptive regulatory frameworks that balance innovation and responsibility. The discussions underscored India's rapid growth in AI workloads (over 30% CAGR) and the significant gap between current and needed GPU capacity (~50,000 available vs. ~200,000 required). Panelists called for policy interventions like GST waivers and income tax benefits to ease infrastructure acquisition and spur local enterprise engagement. Additionally, the importance of trust—encompassing confidence in institutional frameworks, inclusive digital participation, and public-private partnerships—was cited as a core non-technical pillar for sustainable and widespread AI adoption in India. The blueprint is positioned as a pivotal roadmap for India's vision of 'Viksit Bharat 2047,' aiming for AI-driven productivity, public service modernization, opportunity creation, and digital sovereignty.
- Dell Technologies unveiled an AI leadership blueprint for India, focusing on accelerating AI infrastructure, innovation, and governance.
- India's AI compute capacity is projected to grow more than 10x in FLOPS, with AI workloads growing at over a 30% compound annual rate.
- Current GPU supply for AI is 40,000–50,000 units against an estimated immediate need of ~200,000, highlighting the infrastructure gap.
- Key pillars of the blueprint: 1) Invest in scalable, sovereign compute and data foundations, 2) Innovate with workforce skilling and collaboration, 3) Evolve responsible, adaptive governance with security-first principle.
- Policy suggestions included GST waivers on hardware imports for AI service providers, income tax benefits for Indian providers, and incentives that put Indian providers on an equal footing with global players.
- Emphasis on public-private partnerships to unlock the full potential of sovereign AI in India.
- Trust infrastructure—ensuring confidence in digital systems and transactions—highlighted as a non-technical but essential component of sustainable AI adoption.
- Advances aligned with the national vision 'Viksit Bharat 2047,' targeting AI’s role in productivity, modern public services, opportunity expansion, and strategic autonomy.
Building the AI-Ready Future: From Infrastructure to Skills
The session, led by an AMD executive, highlighted AMD's commitment to providing a comprehensive AI compute ecosystem, spanning from edge devices to major core infrastructure, with a focus on open standards and international collaboration. Drawing from experiences with US Department of Energy initiatives, such as the Genesis mission, the speaker discussed the critical role of government, public-private partnerships, and sovereign AI in accelerating scientific discovery, energy innovation, and national security. The session emphasized the necessity of integrating federated compute and secure data sharing, fostering R&D through open ecosystems, and maintaining a human-in-the-loop approach for AI governance. Technical milestones were showcased, including the introduction of AMD’s high-efficiency, high-performance GPU racks capable of mind-boggling exaflop performance at manageable power footprints. The roadmap foresees AI usage and computing capacity growing rapidly, underscoring opportunities for national economies like India to become key actors in cutting-edge AI and semiconductor supply chains. The collaborative approach advocated for open standards, talent development, and open hardware/software infrastructure to catalyze AI adoption and innovation globally, especially as India embarks on its own sovereign AI initiatives.
- AMD provides a full AI capability suite: from AI PCs to core infrastructure to the edge.
- The US Department of Energy's Genesis mission leverages AI to accelerate scientific discovery, focusing on discovery science, energy, and national security.
- The US spends ~$1 trillion annually on R&D, of which roughly 20–30% is government-funded; research output efficiency is declining, and AI is proposed as a remedy.
- Genesis mission emphasizes public-private partnerships and international collaboration (potentially including Japan, EU, UK, etc.) for sovereign AI.
- American Science Cloud: joint infrastructure project running on AMD MI355 clusters targeted at driving innovation in government-priority domains.
- Governance in AI highlighted: 'human-in-the-loop' approach to validate AI agent outputs before operationalizing, especially in sensitive domains.
- AMD rack shown at the summit delivers 2.9 exaflops of FP4 AI compute with 72 GPUs at only 220 kW—showcasing dramatic improvements in compute efficiency.
- By aggregating such systems, achieving zettascale computing (1,000× current exascale) in the next decade is projected, enabling previously unthinkable problem-solving.
- Open-source and open standards are key: AMD committed to open hardware/software to avoid vendor lock-in, foster innovation, and support startups.
- India is encouraged to integrate open ecosystems, leverage its Android/open-source expertise, and insert itself in the semiconductor supply chain at various technology nodes—not restricted to cutting-edge nanometer sizes.
- AI user base is experiencing exponential growth (from 1 million to 1 billion, aiming for 5 billion), making the technology the fastest-adopted in human history.
- AMD hardware is deployed globally across hyperscalers and national labs, underlining its leadership in national mission-critical and AI workloads.
- Session closes with enthusiasm for India's potential role in sovereign AI efforts and the broader open, collaborative AI innovation landscape.
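The rack figures quoted earlier (2.9 exaflops of FP4 compute, 72 GPUs, 220 kW) imply some useful back-of-envelope efficiency numbers. The arithmetic below simply divides the stated figures; the derived per-watt and per-GPU values are our calculation, not numbers claimed in the session:

```python
# Figures as quoted in the session for the showcased AMD rack.
flops = 2.9e18    # FP4 operations per second (2.9 exaflops)
power_w = 220e3   # rack power draw in watts (220 kW)
gpus = 72

flops_per_watt = flops / power_w   # ~1.3e13 -> ~13 TFLOPS/W (FP4)
flops_per_gpu = flops / gpus       # ~4.0e16 -> ~40 PFLOPS per GPU (FP4)

print(f"{flops_per_watt / 1e12:.1f} TFLOPS/W (FP4)")
print(f"{flops_per_gpu / 1e15:.1f} PFLOPS/GPU (FP4)")
```

Note that FP4 is a very low-precision format, so these figures are not directly comparable to the FP64 petaflop/exaflop numbers used for classical supercomputers elsewhere in the summit.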
Designing India’s Digital Future: AI at the Core, 6G at the Edge
The session, led by Mr. Ashok Kumar, Deputy Director General of the Department of Telecom (DoT), Government of India, emphasized India's transition from being a passive consumer to an active creator in the global technology landscape, particularly in AI and 6G. Mr. Kumar outlined the evolution from 2G through 5G, highlighting that previous technology cycles focused on connecting humans and, later, machines. The forthcoming 6G era marks a paradigm shift where AI will be natively integrated into every component of the network—from user devices to the core—enabling 'ubiquitous intelligence' as envisioned in the ITU's framework. He discussed key government initiatives, including extensive support for startups and MSMEs to participate in global standards bodies, over 100 6G R&D projects under the 6G Accelerated Research Program, the establishment of 100+ 5G labs nationwide, and collaboration with the Bharat 6G Alliance for policy and tech guidance. The panel discussion reinforced these points, detailing the vertical integration of AI into devices, networks, and the edge, India's strategic focus on energy efficiency in AI deployments, and the massive expected growth in AI-driven data traffic, projected to reach 30% of total traffic by 2033. The session underscored the critical role of coordinated industry, government, and academia efforts to ensure India not only participates but shapes the global 6G technology and standards ecosystem.
- 6G, as per the latest ITU framework, will feature native AI integration across all levels—from devices to core and edge—embodying the principle of 'ubiquitous intelligence.'
- India is prioritizing moving from a technology consumer to an active standards-setter, with emphasis on local IP and tech contributions becoming part of international standards (3GPP, ITU).
- DoT supports Indian startups and MSMEs to join standards-development organizations (SDOs) like TSDSI and 3GPP at a significantly reduced membership fee (₹10,000 instead of ₹5–6 lakh), facilitating greater participation.
- The 6G Accelerated Research Program has funded over 100 6G-related projects in areas including terahertz technology, AI, machine learning, semantic communication, and sensing.
- Dedicated national testbeds for 6G technologies—such as terahertz and AOC (Artificial Optical Communications)—have been established to spur domestic development.
- The Bharat 6G Alliance, in conjunction with government ministries (DoT, MeitY, DST), is driving collaborative standard-setting, spectrum, device development, and policy formulation for 6G.
- India launched and operationalized 100 5G labs across academic institutes, serving as hubs for future 6G research and use-case development.
- Panel experts outlined the emergence of distributed AI inferencing, with an estimated 30% of mobile data traffic in 2033 projected to be AI-driven, per Nokia Bell Labs research.
- A strong focus was placed on ensuring AI energy efficiency to avoid excessive resource consumption in data centers and networks.
- Upcoming 3GPP Release 21 is anticipated to be the first formal 6G release, with Indian stakeholders aiming for early and active contributions.
The Foundation of AI: Democratizing Compute & Data Infrastructure
The panel at the India AI Impact Summit 2026 discussed the pressing challenges and pathways for democratizing AI infrastructure—focusing on access to computing power, data, talent, and responsible frameworks. Speakers highlighted the current global imbalance in access to AI resources, with a majority of datasets concentrated in high-income countries and minimal representation from developing regions, especially Africa. Addressing these disparities, panelists advocated for federated data-sharing approaches, locally owned data management, the development of open AI models and public infrastructure, and the importance of AI literacy. They forecasted a shift in AI compute needs as models transition from knowledge accumulation to more intelligent, efficient architectures—though fundamental hardware breakthroughs remain elusive. With initiatives like India’s proposed ‘Methri’ platform—an international, modular, digital public infrastructure for AI—countries could collaborate while maintaining data sovereignty and avoiding new dependencies. The session stressed that a combination of technical innovation, policy action, and multistakeholder collaboration is essential to ensure AI’s benefits reach all societies equitably.
- Panelists outlined five critical needs for AI democratization: access, computing power, data access, talent building, and responsible AI policy.
- Serious lack of access to computing power and datasets in developing regions—over 80% of global datasets are in high-income countries; less than 2% in Africa (with sub-Saharan Africa having near zero percent outside South Africa).
- Language diversity remains a major barrier—Africa has over 2,000 languages, underlining the need for inclusive data collection.
- Open models, open weights, and open-source frameworks are seen as necessary (but not sufficient) conditions for democratizing AI.
- New technical strategies like federated learning are recommended to maintain data sovereignty while enabling global model training.
- The World Bank perspective emphasizes that local ownership and management of data is a key indicator of a country moving from AI consumption to AI creation.
- Transition anticipated from large, knowledge-accumulating AI models to smarter, smaller, more efficient models, with power consumption and inference costs as ongoing challenges.
- Digital public infrastructure (DPI), modeled after systems like India's Aadhaar, should be interoperable, trusted, and modular, allowing countries to retain agency and avoid dependency.
- India introduced the 'Methri' platform: a global, modular, multistakeholder AI infrastructure to be developed as a digital public good focusing on compute, data, models, and talent, with governance by design.
- Panelists emphasized the importance of federated, cooperative approaches to ensure sharing without new dependencies and to preserve local control over data, especially in the context of linguistic and cultural diversity.
Indo–German AI Collaboration: Driving Economic Development and Social Impact
The session at the India AI Impact Summit 2026 explored the dynamic and deepening collaboration between India and Germany in artificial intelligence and innovation, with high-profile representatives from research institutes, industry leaders, and policymakers. Highlighting 18 years of Fraunhofer's presence in India and its significant contributions to applied AI research, speakers discussed a range of Indo-German initiatives, including the recent signing of the India-Germany IIA pact to strengthen partnerships in AI-powered industry, talent exchange, and joint research. Emphasis was placed on leveraging India's robust AI talent pool and Germany's precision engineering expertise to create scalable, trustworthy, and sustainable AI solutions in sectors such as manufacturing, healthcare, agriculture, and environment. Key technical themes included secure cloud data spaces, federated AI training, virtual colleagues to preserve organizational knowledge, and applications prioritizing industrial and social good. The session emphasized that government, academia, and industry collaboration—backed by strong regulatory environments and focused investment—is vital for maximizing AI's potential for social welfare and economic growth.
- Fraunhofer has operated in India for 18 years, earning over €70 million from research contracts and forging strong partnerships with Indian industry and government.
- India ranks third in global AI R&D activity and accounts for 15% of the world's AI talent pool, boasting the highest AI skill penetration rate.
- Germany has funded over 60 sustainability-focused AI projects since 2020, with investments in AI lighthouses for climate and environmental protection.
- A new India-Germany IIA pact was launched, focusing on collaborative implementation in AI for industry, manufacturing, talent development, mobility, research, and infrastructure.
- Fraunhofer institutes produce two patents per working day and have pioneered major innovations like MP3 and white LEDs.
- The session highlighted trustworthy and reliable AI, including industrial AI solutions such as virtual colleagues for knowledge retention and federated AI training to protect sensitive data.
- Fraunhofer's data spaces enable secure, rule-based data sharing and scale up to 10,000 transactions per second.
- India's National AI Mission is investing over $2 billion to strengthen its AI ecosystem.
- AI is being applied to key sectors like agriculture, healthcare (diagnostics, personalized medicine), manufacturing (Industry 4.0), logistics, energy, and environment.
- Both countries emphasized the need for responsible and ethical AI to promote inclusive growth and address societal challenges.
Safe and Responsible AI at Scale: Practical Pathways
The panel discussion at the India AI Impact Summit 2026 focused on the critical challenge of making massive, fragmented data silos across public and private sectors 'AI ready.' Experts from government, industry, and research institutions highlighted that while India has vast troves of digitized data, its utility is severely constrained due to lack of interoperability, standardization, and trust. The session underscored the need for common frameworks for AI readiness, open source and federated data architectures, and attention to both contextual and verifiable data to power effective AI-driven solutions for policy and industry. Key initiatives like Data Commons and benchmarks for answer stability across LLMs were discussed as emerging best practices. The need for collaborative approaches involving all stakeholders—government, academia, industry, and end-users—was repeatedly emphasized as central to bridging the current information divide and unlocking data-driven innovation and inclusive decision-making.
- India's enterprise and government data remains largely inaccessible to AI due to fragmentation, siloed formats, and trust concerns.
- Only data that is interoperable, cleansed, contextually linked, and AI-ready can unlock the promise of AI for practical use cases, impacting domains from MSMEs to health and education.
- Examples such as MSMEs generating 5 million new compliance-related queries annually illustrate the scale of the data accessibility problem.
- The Ministry of Statistics and Programme Implementation (MOSPI) emphasized that there is no uniform framework or definition for AI data readiness in India, and called for a collaboratively built, shared framework distinguishing foundational versus aspirational readiness.
- Data Commons (a Google-led initiative) was presented as a working example of converting public and global data into a standardized, open-source, federated knowledge graph, now used by the UN SDGs and others.
- Open sourcing and decentralizing data architectures allows organizations to retain control and governance over their own data while promoting interoperability.
- Emphasis placed on the necessity of contextualizing data, with examples from health and child development where actionable insight requires orchestrating multiple data sources.
- Emergence of the need for standardized benchmarks to test stability and trustworthiness of LLM-generated answers to the same questions across models and sessions.
- Panelists highlighted key barriers: the challenge of managing multi-dimensional data, shortage of AI/data literacy, data contextualization, problems with domain-specific vocabulary in Indian languages, and prevalence of unverifiable, self-reported public data.
- Collaborative foundational work and iterative frameworks are prioritized over seeking instant transformation, given evolving technology and sociotechnical considerations.
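The answer-stability benchmark idea mentioned above can be sketched as a repeated-query agreement check: ask the same question several times and measure how often responses agree. This is a minimal illustrative sketch, not any panelist's actual benchmark; `ask_model` and the canned responses are placeholders for a real LLM API.

```python
from collections import Counter

# Canned responses standing in for repeated live model calls (hypothetical data).
_CANNED = {
    ("model_a", "q1"): ["New Delhi", "New Delhi", "Delhi"],
    ("model_b", "q1"): ["New Delhi", "New Delhi", "New Delhi"],
}

def ask_model(model, question):
    """Placeholder for a real LLM API call; returns the next scripted answer."""
    return _CANNED[(model, question)].pop(0)

def stability_score(model, question, trials=3):
    """Fraction of trials agreeing with the modal answer (1.0 = fully stable)."""
    answers = [ask_model(model, question) for _ in range(trials)]
    return Counter(answers).most_common(1)[0][1] / trials

s_a = stability_score("model_a", "q1")  # 2 of 3 answers agree
s_b = stability_score("model_b", "q1")  # all 3 answers agree
print(s_a, s_b)
```

A production benchmark would also normalize semantically equivalent answers (e.g. "Delhi" vs "New Delhi") before scoring, which is where most of the real difficulty lies.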
AI Meets Agriculture: Building Food Security and Climate Resilience
The session at the India AI Impact Summit 2026, focusing on AI for Food and Climate Resilience, marked a watershed moment for Indian agriculture and digital transformation. Hosted by Maharashtra's government, it featured the launch and review of the pioneering Maha Agri AI Policy 2025-2029 and the operationalization of Mahavistar, India's first large-scale AI advisory platform for farmers. Senior state and national leaders emphasized the critical role of artificial intelligence in addressing climate threats, rising input costs, and information fragmentation in agriculture, especially for smallholders and women farmers. Key announcements included over 2.5 million users of the Mahavistar platform, rollout of traceability digital public infrastructure (DPI), integration of tribal languages, and a compendium documenting global AI use cases for agriculture. The state’s policy and technological leadership were hailed as a scalable model for delivering inclusion, transparency, and food and climate resilience across the global south. Collaborative efforts between central and state governments, global institutions, and the private sector were presented as central to institutionalizing responsible, interoperable, and inclusive AI at population scale.
- Maharashtra launched the Maha Agri AI Policy 2025-2029 for large-scale AI integration in agriculture.
- Mahavistar, India's first AI-powered agriculture advisory network, has over 2.5 million farmer users and operates in Marathi and the tribal language Bhili.
- The state is deploying an open, consent-driven Maha Eex data exchange platform for interoperable agriculture data.
- Maharashtra announced a replicable blueprint for traceability digital public infrastructure (DPI) to ensure supply chain transparency and export competitiveness.
- A compendium of global AI use cases in agriculture was released, documenting successful solutions from Africa, Asia, and Latin America.
- The government committed to prioritize responsible AI governance, inclusion (especially of women and smallholder farmers), capacity building, and investment to scale AI adoption.
- National and state collaboration frameworks are being developed to align AI deployments with India's digital public infrastructure and ensure population-scale impact.
- Integration of AI for hyper-local weather, pest monitoring, market advisories, and credit scoring to support climate resilience and higher farmer incomes.
- The 'AI for Agri 2026' global conference in Mumbai will further advance operational strategies for AI in agriculture.
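The "open, consent-driven" data exchange described above implies that a record is released only when its owner has consented to the specific purpose of use. The source does not describe Maha Eex's internals, so this is a hypothetical sketch of that principle; `ConsentRegistry`, the farmer IDs, and the purpose names are all invented for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class ConsentRegistry:
    """Hypothetical consent ledger: farmer_id -> set of purposes consented to."""
    grants: dict = field(default_factory=dict)

    def grant(self, farmer_id, purpose):
        self.grants.setdefault(farmer_id, set()).add(purpose)

    def allows(self, farmer_id, purpose):
        return purpose in self.grants.get(farmer_id, set())

def fetch_record(registry, store, farmer_id, purpose):
    """Release a record only when explicit consent exists for this purpose."""
    if not registry.allows(farmer_id, purpose):
        raise PermissionError(f"no consent from {farmer_id} for {purpose}")
    return store[farmer_id]

store = {"F001": {"crop": "soybean", "district": "Nagpur"}}
reg = ConsentRegistry()
reg.grant("F001", "credit_scoring")
print(fetch_record(reg, store, "F001", "credit_scoring"))
```

The key design point is that consent is checked per purpose, not per dataset: the same record is available for credit scoring but blocked for any purpose the farmer has not granted.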
Inclusive AI Starts with People, Not Just Algorithms
This panel at the India AI Impact Summit 2026 explored the journeys and impact of pioneering women leaders and entrepreneurs in India's AI ecosystem. The discussion highlighted how grassroots community-building, risk-taking, and inclusive innovation are driving the rapid scaling of human potential in the AI sector. The speakers traced the evolution of India’s technology landscape from the early IT boom to today’s AI revolution, underscoring the pivotal role of networks like AI Kiran, which has dramatically expanded from a small group to a 10,000-strong community propelling women’s participation and leadership in AI. Case studies such as iMerit demonstrate how AI-driven companies are leading not only in technological innovation—including healthcare, automotive, and agriculture solutions—but also in workforce inclusion, with 53% women employees and centers established outside traditional metro hubs. The session called for proactive efforts to ensure AI’s growth is inclusive from the outset, learning from past technological revolutions’ shortcomings, and emphasized the importance of building diverse, resilient talent pipelines to shape responsible AI for social good.
- AI Kiran, a community for women in AI, has grown from 250 named members to over 10,000, with a goal to multiply further.
- Initial efforts showed ChatGPT could only identify 10 women in AI in India; AI Kiran's launch raised this to 250, showcasing the need for representation.
- AI Kiran is forming partnerships to accelerate its impact and expects significant membership growth in the near future.
- iMerit, a 10-year-old AI company, employs over 10,000 people, with about 3,500 in India and 53% women in its workforce.
- iMerit has established AI centers of excellence in non-metro cities like Kolkata, Visakhapatnam, Khammam, Hubli, and Shillong, specializing in areas from autonomous mobility to healthcare.
- Work at iMerit encompasses both computer vision and generative AI, supporting large foundation model companies as well as development of small, fine-tuned models for real-world societal applications.
- iMerit's model includes collaboration with global domain experts (mathematicians, medical specialists, agronomists) to ensure AI systems are robust and relevant.
- Cultural shift: Both panelists emphasized inclusive innovation—making sure AI’s benefits and opportunities are equitably distributed from the outset, rather than rectifying inequities later.
- Women leaders in the panel highlighted the importance of male allies and community support in advancing gender diversity in tech.
- Entrepreneurial risk-taking and reinvention are recurring themes—panelists urge starting over if AI offers new trajectories, emphasizing growth mindsets and audacious goals.
The Innovation Beneath AI: The US-India Partnership powering the AI Era
The session at the India AI Impact Summit 2026 focused on the fundamental infrastructure powering the AI revolution, emphasizing the need to close the gap between high-level commitments and on-ground capacity, particularly in areas such as clean energy, semiconductors, critical minerals, and data centers. Key panelists discussed cross-border collaboration, especially between the US and India, highlighting major investments like Google’s $15 billion commitment, the development of rare earth corridors in India's union budget, the launch of a global minerals framework by 54 countries, and the rapid scaling of AI-driven companies. Industry leaders provided perspectives on the need for foundational innovation beyond software—stretching down to minerals and energy grids—as well as the changing landscape for entrepreneurs enabled by smart tools and global talent pools. The panel projected massive infrastructure buildouts, the risks of overcapacity, and opportunities for startups to leverage new AI tools to more efficiently bring products to market. The discussion underscored India’s central role in global AI infrastructure, as seen through recent large-scale investments and partnerships.
- Google committed $15 billion to India, with a focus on clean energy and a gigawatt-scale AI hub.
- Four new subsea cables are planned between the US and India, enhancing connectivity for data-driven AI infrastructure.
- India’s union budget includes provisions for developing 'rare earth corridors' to secure critical minerals for AI and semiconductor manufacturing.
- 54 countries, two weeks prior to the summit, launched 'Forge', the first global framework focused on mineral supply chains essential to AI.
- Vulcan Elements, backed by $1.4 billion from the US government, is working to localize America’s rare earth magnet supply chain, previously 90% dependent on China.
- AI's energy infrastructure already consumes upwards of 10% of available power, driving urgent needs for clean, renewable grid innovation.
- Entrepreneurs can now leverage AI tools and global talent (notably India’s) with a fraction of previous capital requirements, potentially reducing barriers to entry and accelerating go-to-market timelines.
- There is a risk of infrastructure overbuild, leading to a future market correction, but this may make resources cheaper and more accessible for startups.
- Physical infrastructure—across energy, minerals, data centers, and network cables—is now seen as vital as software in AI value creation.
- Major global tech leaders, including Sundar Pichai and Jensen Huang, are characterizing this as the largest infrastructure buildout in human history.
How AI Is Transforming Diplomacy and Conflict Management
This session, hosted by the Belfer Center's Emerging Tech Program, introduced the 'Move 37 Initiative'—a global effort to explore how artificial intelligence can responsibly augment diplomacy and negotiation. The team, comprising international policy experts, researchers, and practitioners, outlined the complexities inherent in diplomatic negotiations involving multiple stakeholders, vast information flows, and shifting dynamics. They argued that while AI tools, including language models and game-theoretic approaches, have transformative potential to support research, analysis, strategizing, and execution within negotiation processes, human authority and oversight must remain paramount. The panel emphasized the urgent need for modular, transparent AI systems with clearly scoped and institution-specific applications, and highlighted their call for international collaboration as essential for building trusted, future-ready AI-enabled diplomatic practices.
- Launch of the 'Move 37 Initiative' focusing on AI's role in augmenting diplomacy and negotiation.
- Program seeks global collaboration and expertise to build practical AI solutions for complex policy and governance challenges.
- Diplomatic negotiations are characterized by high complexity, with multiple countries, competing interests, high stakes, resource constraints, and long-term consequences.
- AI applications identified include autonomous research agents, real-time translation, strategic simulations, automated evidence synthesis, and gap analysis.
- The team warns against over-reliance on large language models (LLMs) due to challenges with transparency, accountability, and strategic misrepresentation.
- Three core challenges for AI in diplomacy: dynamic and evolving negotiation contexts, strategic deception, and aggregating divergent priorities.
- Commitment to three principles: human authority must remain central, AI systems must be modular and transparent, and augmentation must be contextually tailored.
- Significant influence of past experience in international agreements (e.g., EU AI Act, UN AI advisory bodies) informs project direction.
- A call to action for international partners, evidence-sharing, and input to ensure that AI in diplomacy is developed ethically and effectively.
Advancing Scientific AI with Safety, Ethics, and Responsibility
This session at the India AI Impact Summit 2026 concentrated on the emerging risks and governance challenges associated with AI-enabled tools in sensitive scientific domains, particularly the life sciences. Participants discussed how the advent of sophisticated bio-design tools and large language models (LLMs) is transforming the risk landscape by decoupling scientific risk from traditional physical containment, thus requiring new, adaptive, and decentralized oversight mechanisms. The imperative to balance open science with security was emphasized—proposing differentiated, tiered access rather than one-size-fits-all restrictions, and contextual governance norms tailored to regional, sociocultural, and resource disparities. Experts advocated for integrating AI evaluation frameworks into institutional biosafety and information security practices, capacity-building for AI-bio security, and the establishment of regular, independent risk evaluations. The conversation highlighted the gaps in current regulatory approaches, limitations of foreign-centric benchmarks for Southeast Asia, and called for unified, participatory, and adaptable policy frameworks that embed technical and ethical safeguards, especially for low-resource and diverse sociocultural settings. There was a consensus on the importance of institutionalizing independent red-teaming and risk assessment, potentially via AI safety institutes linked to international norms, with routine (e.g., six-monthly) risk reviews and government investment.
- AI-enabled bio-design and LLM tools are transforming risk management in the life sciences, shifting the focus from physical containment to upstream design and digital governance.
- There are currently over 1500 AI-powered bio-design tools fundamentally altering scientific processes and risk landscapes.
- Decentralized and adaptive oversight mechanisms are needed; centralized, government-alone inspection approaches are inadequate for rapidly evolving AI risks.
- "Web of prevention"—multiple, overlapping measures—should be adopted rather than relying on single points of control.
- Balancing open science with security requires tiered access, contextual norms, pre-deployment assessments, and avoiding blanket restrictions, especially to support low-resource and developing regions.
- Differentiated governance at the capability level is preferable to blanket, access-level restrictions for open-source and scientific collaboration.
- India stands third globally in AI readiness, but Southeast Asian countries lag (e.g., Indonesia ranked 49th), indicating a need for region-specific frameworks.
- Leading LLMs failed to mitigate 20–30% of biological risks in Southeast Asian contexts due to a lack of sociocultural attunement and region-specific safety benchmarks.
- Policy frameworks must include participatory, end-user-centered design to account for regional diversity, technical needs, and resource constraints.
- India endorses self-regulation and voluntary risk mitigation commitments, but unified frameworks adaptable to local contexts are still missing.
- Routine, independent red-teaming and risk assessments (suggested every six months) should be institutionalized, drawing on models like the International Atomic Energy Agency but adapted for the dual-use, widely accessible nature of biotechnology.
- An AI safety or security institute, credentialed and independent but working closely with governments and anchored to international norms (e.g., Biological Weapons Convention, WHO), was recommended.
- Investments in capacity-building, technical evaluation protocols, and cross-institutional data sharing with tiered confidentiality are crucial.
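The "differentiated governance at the capability level" proposed above can be pictured as a tier map in which higher-risk tool capabilities demand stronger user vetting, rather than blocking access to the whole tool. This is a minimal sketch of that idea; the capability names and credential types are hypothetical, not drawn from any actual governance framework.

```python
# Hypothetical tier map: each capability lists the credentials it requires.
TIER_REQUIREMENTS = {
    "literature_search": set(),                       # open-tier: no vetting
    "protein_structure_prediction": {"verified_id"},  # mid-tier
    "novel_sequence_generation": {"verified_id", "institutional_biosafety"},
}

def may_use(capability, credentials):
    """Grant access when the user's credentials cover the tier's requirements."""
    required = TIER_REQUIREMENTS[capability]
    return required <= set(credentials)

print(may_use("literature_search", []))                       # open to all
print(may_use("novel_sequence_generation", ["verified_id"]))  # insufficient
```

Because restrictions attach to capabilities rather than to the tool as a whole, low-resource researchers retain open access to benign functions, which is the session's argument against blanket access-level restrictions.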
Welfare for All | Ensuring Equitable AI in the World’s Democracies
The session at the India AI Impact Summit 2026 focused on strategies for democratizing the economic value of artificial intelligence, emphasizing global standards, local adaptation, and bridging the AI skills gap in developing nations like India. Panelists comprising leaders from Google, Microsoft, L&T Technology Services, Rapid7, and academia discussed the risks of AI's economic benefits being concentrated in Western and Chinese corporations and highlighted the need for international collaboration, contextualized standards, and co-creation between governments and industry. Concrete initiatives such as Google's IndicGenBench for Indian languages, Microsoft's Elevate skilling commitments, and the coalition for secure AI frameworks were spotlighted. A recurring theme was the importance of adaptive regulation, open-source frameworks, workforce upskilling, and public-private partnerships to ensure India and the Global South are equipped both to innovate and to participate equitably in the global AI economy.
- Estimates suggest up to 70% of AI's economic value could be concentrated in Western economies and China without intentional democratization.
- International AI safety report authored by 100 experts underscores the need for accelerated open standards and customized metrics suitable for local languages and cultures.
- India-specific measures: Google launched IndicGenBench, supporting 29 Indian languages, 12 scripts, and 4 language families to evaluate and localize large language models.
- Advocacy for continuous, not one-time, audits of AI systems and standards to prevent temporal drift as technology evolves.
- Google's Secure AI Framework and the Coalition for Secure AI are expanding in APAC to address open-source supply chain risks.
- Google and partners are focusing on open-source standards, interoperability, capacity building, and upskilling for AI safety and adoption.
- Microsoft highlighted its five-part holistic approach: infrastructure investment, AI compute scalability, multilingual/multicultural adaptation, local partnerships, and diffusion measurement.
- Microsoft Elevate initiative: commitment doubled to upskill 20 million Indians by 2030, with 5.6 million upskilled already and new educator programs in partnership with Indian government ministries.
- Emphasis on adaptive regulation—global standards should be localized to India’s needs, such as bandwidth constraints and linguistic diversity, to avoid stifling innovation, especially for startups.
- Call for co-creation over traditional technology transfer approaches, making standards enablers of innovation rather than compliance hurdles.
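The call above for continuous rather than one-time audits amounts to periodically re-running the same evaluation suite and flagging metrics that have drifted beyond a tolerance since the baseline audit. This is an illustrative sketch only; the metric names and scores are invented, and real audits would track far richer statistics than a single per-metric score.

```python
def audit_drift(baseline, current, tolerance=0.05):
    """Flag metrics whose score dropped by more than `tolerance` since baseline,
    or that vanished from the current audit entirely."""
    flagged = {}
    for metric, base_score in baseline.items():
        cur = current.get(metric)
        if cur is None or base_score - cur > tolerance:
            flagged[metric] = (base_score, cur)
    return flagged

# Hypothetical audit snapshots taken six months apart.
baseline = {"hindi_qa_accuracy": 0.82, "safety_refusal_rate": 0.97}
current  = {"hindi_qa_accuracy": 0.74, "safety_refusal_rate": 0.96}

flagged = audit_drift(baseline, current)
print(flagged)  # accuracy dropped 0.08 > 0.05, so it is flagged
```

Scheduling this check on every model or standard revision, rather than only at certification time, is what keeps an audit regime aligned with the temporal drift the panel warned about.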