When the boss is an app: Gig economy workers are giving algorithms human traits

A new academic paper explores how gig workers in algorithm-managed roles attribute human characteristics to digital systems, reshaping workplace expectations and psychological contracts

Sian Harrington

In the digital work economy, the idea of a ‘boss’ is being quietly rewritten. On-demand labour platforms, such as food delivery, ride-hailing and freelance gig apps, rarely offer workers a direct human point of contact. Instead, decisions about task assignment, performance ratings and continued access to work are made through algorithms. And while this shift is often framed as a matter of convenience or efficiency, new research suggests it may be reshaping something far more human: the psychological contract.

In a conceptual paper published in Human Resource Management Journal, Sherman, Rousseau, Carbery, McDonnell, Duggan and Morley offer a new lens for understanding how gig workers interact with algorithmic management. Drawing on the cognitive science concept of Theory of Mind – our ability to attribute mental states to others – they propose that many platform workers engage with algorithms not as cold, mechanical systems but as intentional, even moral, agents.

It’s a provocative thesis. And it raises difficult questions about fairness, transparency and dignity in digital work environments.

Psychological contracts in a post-human manager world

The term psychological contract describes the unspoken expectations workers form about what they owe their employer, and what they expect in return. Traditionally, this contract is shaped by human relationships: interactions with managers, colleagues and organisational norms.

But what happens when those relational cues are stripped away?

In platform-based work, algorithms act as gatekeepers to opportunity. They determine when workers are offered jobs, how their behaviour is evaluated and what rewards or penalties they receive. Yet, as the authors argue, these systems rarely offer explanations or transparency. The result is ambiguity: workers are left to interpret decisions without meaningful feedback.

Faced with this opacity, workers may begin to anthropomorphise the algorithm. That is, they start to imagine that the system has beliefs, intentions or emotions, much like a human boss. These mental models help workers predict what the system ‘wants’, interpret its responses and adjust their behaviour accordingly.

The authors build their argument through a synthesis of prior empirical research, but their central contribution is theoretical: they propose that Theory of Mind is a cognitive mechanism workers use to make sense of algorithmic control, and that this process plays a central role in how psychological contracts are formed in gig work.

Why Theory of Mind matters

Theory of Mind refers to our capacity to infer the thoughts and feelings of others based on limited information. This ability underpins trust, cooperation and moral judgement.

When applied to gig platforms, the concept reveals something important: workers may treat the algorithm as though it is a person.

These interpretations aren’t necessarily conscious. But they influence how workers feel about their job, how they evaluate justice and how they respond when outcomes don’t meet expectations.

The authors point to Cameron's (2022) study of how Ridehail app workers find meaning in their work, in which one worker with 18 months on the platform reacted to the algorithm’s ‘decisions’ by saying it was out to get him. “Even though the algorithm is purportedly designed to treat each worker the same way based on their behaviour, appworkers may feel the algorithm is deciding to penalise them individually,” they say.

Other app workers believe the system is fair and rewards loyalty, suggesting a deeper emotional investment than one might expect in a transactional setting. What emerges is a complex and unspoken psychological contract, not between worker and employer but between worker and machine.

In other words, the algorithm doesn’t need to act like a human for people to relate to it as one.

Reframing responsibility: Why this matters for HR

While this paper focuses on platform-based gig work, in particular app work, its insights have implications for a much broader range of work models, especially as more organisations adopt algorithmic systems to support decision-making.

Performance review systems, predictive scheduling, AI-led recruitment platforms and employee monitoring tools are all increasingly part of the workplace landscape. Many of these tools operate with limited transparency or feedback, relying on workers to adapt to the system’s logic.

But if employees are using Theory of Mind reasoning to interpret these tools, then perception becomes as important as design.

People will look for patterns. They will infer intention. They will form beliefs about fairness. And if these expectations are violated, if decisions seem arbitrary or punitive without explanation, the emotional impact can mirror the breach of a traditional psychological contract.

This makes the case not just for technological transparency but for ethical interface design. The systems we deploy to manage people must reflect an understanding of how people relate to those systems, emotionally as well as functionally.

Towards a more human-centred algorithmic design

One of the most compelling contributions of this paper is its call to recognise algorithms as relational actors. Not because they have agency in the philosophical sense but because people treat them as if they do. This has direct implications for how we design work experiences in digitally mediated environments. If we acknowledge that workers are actively interpreting algorithmic behaviour, then:
  • Clear communication becomes essential. Workers need to understand how systems work and why decisions are made.
  • Feedback loops matter. The absence of explanation doesn’t reduce interpretation; it amplifies it (see the sketch after this list).
  • Perceived fairness must be a design objective, not just an HR afterthought.
  • HR and tech teams need to collaborate more closely to shape worker-facing systems that respect human cognitive patterns and emotional responses.
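
To make the first two points concrete, here is a minimal sketch, in TypeScript, of what a worker-facing decision record might look like if explanation and feedback were treated as design objectives rather than afterthoughts. Every name in it, from DecisionRecord to appealUrl, is a hypothetical illustration, not a description of any real platform's system.

```typescript
// A minimal sketch of a worker-facing "decision record" in which the
// explanation and a feedback route travel with the decision itself.
// All names here (DecisionRecord, appealUrl, the factors) are
// hypothetical illustrations, not any real platform's API.

interface DecisionRecord {
  decisionId: string;
  outcome: "task_offered" | "task_withheld" | "rating_adjusted";
  // Plain-language reason, so the worker is not left to infer intent.
  explanation: string;
  // The inputs the system actually weighed, surfaced to the worker.
  factorsConsidered: { factor: string; weight: number }[];
  // A route to query or contest the decision: the feedback loop.
  appealUrl: string;
}

// Render a decision with its reasons, rather than as an unexplained fact.
function describeDecision(d: DecisionRecord): string {
  const factors = d.factorsConsidered
    .map((f) => `${f.factor} (weight ${f.weight})`)
    .join(", ");
  return [
    `Decision ${d.decisionId}: ${d.outcome}`,
    `Why: ${d.explanation}`,
    `Based on: ${factors}`,
    `Query this decision: ${d.appealUrl}`,
  ].join("\n");
}

// Example: a withheld task explained as a demand issue, not a penalty.
const example: DecisionRecord = {
  decisionId: "d-1042",
  outcome: "task_withheld",
  explanation:
    "Fewer tasks were available in your area during this window; no individual penalty was applied.",
  factorsConsidered: [
    { factor: "local_demand", weight: 0.7 },
    { factor: "active_workers_nearby", weight: 0.3 },
  ],
  appealUrl: "https://platform.example/decisions/d-1042/query",
};

console.log(describeDecision(example));
```

The design choice being illustrated is simple: the explanation and the route to contest a decision travel with the decision itself, so the worker never has to fill the silence with inferred intent.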

Designing for trust in a blended workforce

The gig economy is often treated as a separate labour category but it’s increasingly relevant to mainstream HR thinking. As workforce models become more flexible, and as AI is embedded deeper into organisational infrastructure, the lines between full-time employee, independent contractor and digital worker are blurring.

This paper challenges us to take seriously the cognitive and emotional realities of working under algorithmic control. It reminds us that even in low-touch environments, people seek coherence, justice and human connection. And it suggests that failing to account for this will lead to psychological contract breaches, not with people but with systems.

In the end, designing fair and effective workplaces means designing for the way people think, not just the way algorithms operate.

“Anthropomorphising the Algorithm: A ‘Theory of Mind’ Perspective on Psychological Contract Creation in Gig Work Arrangements” by Ultan Sherman, Denise M. Rousseau, Ronan Carbery, Anthony McDonnell, James Duggan and Michael J. Morley, published in the Human Resource Management Journal on 10 March 2025, explores how gig workers perceive and interact with algorithmic management systems.

Published 20 May 2025