As businesses race to integrate artificial intelligence into everyday operations, one often-overlooked human factor can either accelerate progress or bring it to a halt: psychological safety. A new global study suggests that strict HR policies alone are not enough; workplaces where employees feel secure to voice ideas, take risks, and make mistakes without repercussions are proving central to AI success.
A newly released global report from Infosys (NSE, BSE, NYSE: INFY) and MIT Technology Review Insights finds that 83% of business executives say psychological safety directly influences the success of corporate AI projects. Building that environment in the AI era requires more than goodwill or generic HR guidelines; it demands candid discussion of AI's real capabilities, its limitations, and the specific scenarios in which its use is endorsed. Through this collaboration, Infosys aims to equip global decision-makers with insights and strategies for adopting AI responsibly and at scale, supported by Infosys Topaz, its AI-first set of services, solutions, and platforms.
Titled "Creating Psychological Safety in the AI Era," the report (available at https://www.technologyreview.com/2025/12/16/1125899/creating-psychological-safety-in-the-ai-era/) examines why employees often hesitate to test new ideas, question established norms, or lead initiatives for fear of negative consequences, stifling innovation despite access to advanced technology. Even as investment in AI grows, workplace anxiety, and in particular the fear of failure, remains a primary barrier to widespread adoption.
Despite rapid progress in AI capabilities, the study finds that human factors are often the real bottleneck for organizations. Fear of failure, unclear communication, and leaders who are not open to feedback frequently leave employees disengaged from AI initiatives: a company may have the latest tools and plans in place, yet those initiatives stall if the environment does not feel supportive. Scaling AI adoption, the report argues, depends as much on building confidence and adaptability in teams as on deploying state-of-the-art technology.
Key findings from the report include:
- Psychologically safe environments, where people feel free to speak up, try bold ideas, and learn from mistakes without judgment, tend to excel in AI initiatives. 83% of respondents confirm that this safety plays a noticeable role in AI project success, and 84% see clear links to business outcomes such as improved efficiency and innovation.
- Anxiety holds leaders back. About one in five respondents (22%) admit to holding back from initiating or proposing AI projects for fear of criticism or failure, though nearly three-quarters (73%) say they can share candid input and viewpoints in their organizations without fear.
- Psychological safety is not a fixed achievement but an ongoing pursuit. Only 39% of participants rate their organization's current level of safety as "high," while 48% rate it as "moderate," a gap suggesting that some firms are pushing ahead with AI before the cultural foundations are in place.
- Communication and leadership behavior are the most powerful levers for change. Sixty percent of respondents believe transparent explanations of how AI will (and will not) reshape jobs would do the most to improve safety, while just over half (51%) stress the importance of leaders visibly welcoming questions, disagreement, and even failure.
- Fostering this safety goes beyond goodwill or standard HR measures; it requires clarity about AI's practical strengths, weaknesses, and sanctioned applications. Honest, continuous conversation lets businesses emphasize integrity, ethical practice, and involvement from all stakeholders.
As Laurel Ruma, Global Editorial Director at MIT Technology Review Insights, puts it, “Our joint research with Infosys demonstrates that psychological safety isn't just a fluffy concept—it's a quantifiable force behind AI achievements. Executives who articulate AI's effects clearly and exemplify receptiveness to inquiries and pushback set the stage for breakthroughs. Lacking that bedrock of trust, even the smartest AI blueprints are doomed to underperform.”
Rafee Tarafdar, Infosys's Chief Technology Officer, adds, “From what we've witnessed, the top AI overhauls in businesses occur where psychological safety flourishes. When staff are free to experiment without risk, creativity thrives. This ethos of reliability and candor lets groups tap into AI's true power, yielding significant gains and enduring expansion.”
Sushanth Tharappan, Executive Vice President - HR at Infosys, shares, “Within Infosys, we've nurtured an inventive mindset where our teams are always on the lookout for fresh ways to leverage AI. We've directly experienced how psychological safety speeds up implementation; safe zones for testing and redefining tasks actually simplify the tech side. This study validates that corporations need to blend financial commitments to technology with meaningful cultural shifts to ensure AI's enduring benefits.”
Ultimately, the report stresses that evolving with AI is not purely a technological journey; it is equally one of cultural change. By making psychological safety a priority, companies can cultivate the trust, resilience, and openness essential to fully realizing AI's potential.
About MIT Technology Review Insights
MIT Technology Review Insights is the custom publishing division of MIT Technology Review, the world's longest-running technology publication, backed by the world's foremost institution for technology. It produces live events and research on today's leading technology and business challenges, conducting qualitative and quantitative studies in the U.S. and abroad and publishing a broad range of content, including articles, reports, infographics, videos, and podcasts.
About Infosys
Infosys is a global leader in next-generation digital services and consulting. With more than 320,000 employees, we amplify human potential and create new opportunities for people, businesses, and communities. We help clients in 59 countries navigate their digital transformations. Drawing on over four decades of experience managing the complex systems of global enterprises, we guide clients through transformations powered by cloud and AI. Our AI-native foundation equips businesses with scalable, agile digital solutions, while our always-on learning culture drives continuous improvement through the exchange of digital skills, expertise, and ideas from our innovation ecosystem. We are deeply committed to responsible governance, environmental sustainability, and fostering an inclusive workplace where diverse talent can thrive.
Head to www.infosys.com to discover how Infosys (NSE, BSE, NYSE: INFY) can support your organization in tackling its upcoming challenges.
Safe Harbor
Some remarks in this announcement regarding our upcoming expansion possibilities, financial performance, or operational results are prospective in nature and qualify for the protective provisions of the Private Securities Litigation Reform Act of 1995. These involve various uncertainties and risks that could lead to actual outcomes varying substantially from the projections. Such risks encompass, but aren't limited to, challenges in executing our strategic plans, heightened rivalry for skilled personnel, our success in recruiting and retaining talent, rising compensation costs, expenditures on upskilling staff, adapting to a blended remote-office setup, economic volatility and international political tensions, tech advancements and upheavals like artificial intelligence ("AI") and generative AI, the intricate and shifting regulatory environment including shifts in immigration laws, our sustainability goals, our approach to capital distribution and projections about our competitive standing, future activities, earnings, liquidity, assets, corporate maneuvers such as mergers, and data security concerns. Key elements potentially impacting results differently from anticipated are elaborated in our U.S. Securities and Exchange Commission documents, particularly our Annual Report on Form 20-F for the fiscal period concluding March 31, 2025. These are accessible via www.sec.gov. Infosys might periodically issue further verbal or written forward-looking statements, found in SEC filings and shareholder communications. The organization isn't obligated to revise any such statements unless mandated by legal requirements.
Media contact
For additional details, reach out to: PR_Global@Infosys.com