Editor’s Notes: This is a guest post by Lynn Yu Ling Ng, a Banting Postdoctoral Fellow in the Department of Politics at York University, Canada. She has research expertise in global care labor migration, with an area focus on East Asia (Singapore and Taiwan). Her current research program advocates for migrant care workers and elderly communities in and beyond Canada.

“The amount of personal information being requested in rental applications has become absurd. I feel like I’m applying for a top-secret clearance just to rent a basement suite.”
– 25-year-old renter, Vancouver, BC

“When you’re competing with 50 other people for an apartment, you don’t have the luxury of saying no to invasive screening. You either consent or risk being homeless.”
– 32-year-old renter, Vancouver, BC

If you’ve applied for rental housing in Canada lately, you know the drill: provide references, fill out the required forms, submit to a credit check, and more. Most of us suspect that some of the personal and sensitive information we are asked to hand over is unnecessary, or even illegal to request, but AI tools make the implications particularly insidious. Increasingly, landlords are relying on AI-powered tools to speed up tenant screening, using them to dig through your social media history, scan your online behaviour for “red flags”, and compile risk scores based on opaque algorithms. This is happening right now across Canada, and it’s making our housing crisis worse for the people who are already struggling the most.

In an ongoing co-authored paper for the Journal of Urban Affairs, my collaborators and I have come across troubling signs of privacy breaches, usually the result of an accidental leak or hack, in a financialized housing market that leans into AI landlord tools. In an ideal world, personal privacy and data justice would still be the norm.

“We should be able to determine our interactions with technology by debating and, if necessary, resisting and proposing different paths. If we cannot imagine how to regain the kind of privacy we would like, how to allow people to opt out of being surveilled through their data – or even of producing those data in the first place – we may have to reinvent as well as renegotiate”. (Linnet Taylor, 2017: 12)

Landlords Might Be Using AI to Screen You

At the moment, there’s an ongoing joint investigation (announced in June 2024) by the Office of the Information and Privacy Commissioner for British Columbia (OIPC) and the federal Office of the Privacy Commissioner of Canada (OPC) into whether CERTN has met its obligations under federal and provincial privacy legislation. The inquiry asks key questions: Is more information than necessary being collected? Is there transparency about the data? How is the information being used? Are tenants giving informed consent?

A 2018 report by the BC Information and Privacy Commissioner found that 10 out of 13 landlords studied were systematically over-collecting sensitive personal information in violation of the Personal Information Protection Act (PIPA). Many were routinely requiring credit checks from all applicants and asking for information that technically violated the BC Human Rights Code. Even when privacy violations are identified, enforcement is weak, penalties are minimal, and tech companies simply adapt their practices to continue operating. 

My fellow researchers and I found that companies such as CERTN and Single Key are selling AI screening services to property managers and landlords across the country. These tools promise to make tenant selection faster, more efficient, and more “objective”. But what are they actually doing?

CERTN, a company based in Vancouver, offers landlords what it calls a “revolution in screening tech”: a comprehensive, machine-learning-powered background check on every applicant. According to their marketing materials, the system can:

Scan seven years of your social media activity, including posts made about you by other people
Search over 100 databases of personal information, including eviction records, criminal history, and international watch lists
Flag “risky” online behaviour such as hate speech, violence, drug use, or “potentially illegal activities”
Generate automated risk scores to help landlords quickly sort applicants into categories

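To make concrete how opaque such a score can be, here is a minimal, hypothetical sketch of this style of flag-and-weight scoring. It is not CERTN’s actual model, which is proprietary; the flag categories, weights, and data structures below are invented purely for illustration.

```python
# Hypothetical illustration only: NOT any vendor's actual model.
# Shows how an opaque "risk score" can be assembled from scraped signals.

from dataclasses import dataclass, field

# Invented weights; a real vendor's features and weights are proprietary.
FLAG_WEIGHTS = {
    "eviction_record": 40,
    "negative_press": 25,
    "social_media_flag": 15,   # posts an upstream classifier labels "risky"
    "thin_credit_file": 10,    # penalizes newcomers and young applicants
    "gig_income": 10,          # penalizes non-traditional employment
}

@dataclass
class Applicant:
    name: str
    flags: list = field(default_factory=list)  # labels attached by scrapers

def risk_score(applicant: Applicant) -> int:
    """Sum the weights of whatever flags upstream scrapers attached.

    The applicant never sees these flags, cannot tell whether they are
    accurate, and cannot contest the weights.
    """
    return sum(FLAG_WEIGHTS.get(flag, 0) for flag in applicant.flags)

def rank_applicants(applicants: list) -> list:
    """Landlord-facing view: applicants sorted 'lowest risk' first."""
    return sorted(applicants, key=risk_score)

if __name__ == "__main__":
    pool = [
        Applicant("Newcomer", flags=["thin_credit_file", "gig_income"]),
        Applicant("Long-time resident", flags=[]),
    ]
    for a in rank_applicants(pool):
        print(a.name, risk_score(a))
```

Even this toy version shows the core problem: an applicant with no negative history at all can be ranked behind others simply for having a thin Canadian credit file or non-traditional income, and nothing in the output explains why.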
The mining of your digital life doesn’t stop there. To rent a one-bedroom apartment in Toronto or Vancouver, you’re being asked to surrender more personal information than you would to get a security clearance. All of it can be used to generate a score that determines whether you get housing.

The company’s co-founder has explained to CBC News that they’re not just looking at credit scores or rental history. They want to assess “how clear or credible or kind an applicant is” by analyzing their social behaviour online.

Another Canadian company, Single Key, boasts a more extensive reach. Their website claims to have already screened over half a million tenants through “real-time public document searches of over 200,000 databases across criminal records, court decisions, past evictions, negative press, social media profiles, public biographies, past employment and more”.

The company markets itself as a solution to landlord risk, promising to identify “desirable” and “risky” tenants. According to their materials, positive indicators include credit scores above a certain threshold and glowing landlord references. Negative indicators include criminal history, high debt levels, and other vaguely defined markers of failure.

But the same systems designed to protect landlords from “bad tenants” are disproportionately flagging young people, newcomers to Canada, gig workers, people rebuilding their lives after personal crises, racialized communities, and others in already vulnerable positions.

No one is immune to algorithmic systems that sweep up false positives at alarming rates. Applicants rarely know what data was used against them or whether it was even accurate to begin with. Outdated records, incorrect information, and mistaken identities compound the errors that arise when the technology “hallucinates”.

Is “Consent” Really Consent? 

You consent to all of this surveillance when you click “I agree” on the rental application form. But is it really consent when you’re competing with hundreds of other applicants for every available unit, and saying no to invasive screening is not a realistic option?

Researchers call this structurally coerced consent. In theory, you have the freedom to refuse. But you are not free from the consequences of your choice. In practice, for many people, refusing to undergo these checks means you don’t get housing. That’s not a real choice.

The power imbalance between landlords and tenants has never been more stark. Landlords hold almost all the cards: they own scarce housing in expensive markets, set the terms of applications, and use AI tools that tenants can neither see nor challenge. Meanwhile, renters – often younger, lower-income, and from marginalized communities – are forced to hand over intimate details of their lives for a chance at securing shelter.

Who Gets Hurt Most by AI Tools? The Communities Already Facing Discrimination

Technology doesn’t eliminate bias. It automates, amplifies, and scales it. Data surveillance scholars have documented how AI tenant screening algorithms worsen structural inequality under the guise of efficiency. Let us not forget that old patterns of discrimination are baked into the training data: eviction records, credit scores, criminal records, and other datasets that already reflect the unfair targeting of racialized communities. Cycles of exclusion are perpetuated especially against:

Low-income renters who may have spotty credit due to medical debt, job loss, or other life circumstances
Racialized communities, particularly Black and Indigenous renters, who face higher eviction rates not because they’re “riskier” tenants, but because they’re targeted more aggressively by landlords
Newcomers to Canada who don’t have established Canadian credit history or long rental records
Young people and students entering the workforce without years of steady employment
Gig workers and contract employees whose income doesn’t fit traditional employment verification models

Evictions target the vulnerable. A recent study of over 232,000 eviction filings in Toronto over a decade found that landlords, particularly corporate and financial landlords who treat housing as an investment product, disproportionately target low-income neighbourhoods with higher percentages of racialized residents. These landlords file evictions not because tenants are actually failing to pay rent, but because evicting tenants allows them to raise rents beyond legal limits, profiting from forced turnover or “renovictions”.

To reiterate, AI doesn’t transcend human bias; it encodes and accelerates old habits of discrimination that have always shaped who gets housing and who doesn’t. Technology’s objective veneer makes this supercharged injustice harder to challenge. One of the most troubling aspects of AI tenant screening is how vague and subjective the category of “risk” has become. These systems claim to be “objective” and “data-driven”, but what makes a tenant “risky” is a subjective judgment that reflects encoded racial biases.

What Information Is Actually Necessary?

The dangers of excessive data collection in housing aren’t hypothetical. They’re already playing out in cities that have embraced AI-driven systems. In Los Angeles, the city operates a “coordinated entry system” for unhoused people seeking shelter. The system uses an algorithm called the Vulnerability Index-Service Prioritization Decision Assistance Tool (VI-SPDAT) to assess and rank people based on their level of need.

The intention is to prioritize the most vulnerable. But unhoused people have to answer deeply invasive questions about their trauma, mental health, substance use, sexual behaviour, and self-harm tendencies, all of which are highly subject to arbitrary judgment. VI-SPDAT collects information about where they go during the day, their past experiences with police, their medical history, their “coping mechanisms, feelings, and fears”, and more. This data is then shared with nearly 170 different organizations: government agencies, police, corporations, medical facilities, addiction recovery centers, and university research hubs, all with permission to use the data for “public messaging purposes”.

The stated goal is efficiency in getting people housed faster. But the real effect is to sort, rank, and criminalize people based on already existing class and racial biases. Those who score too low are deemed to need “no intervention”. Those in the middle get limited support. Only those who score above a certain threshold qualify for housing assessments.
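To make the triage mechanics concrete, here is a minimal, hypothetical sketch of how score-banding of this kind works. The cut-off values and band labels are invented for illustration; they are not the official VI-SPDAT thresholds.

```python
# Hypothetical illustration of score-banding triage; NOT the official
# VI-SPDAT thresholds. A single number decides what help someone receives.

LOW_CUTOFF = 3    # placeholder cut-off, invented for this example
HIGH_CUTOFF = 8   # placeholder cut-off, invented for this example

def triage(vulnerability_score: int) -> str:
    """Map a vulnerability score to a service band (hypothetical bands)."""
    if vulnerability_score < LOW_CUTOFF:
        return "no intervention"
    if vulnerability_score < HIGH_CUTOFF:
        return "limited support"
    return "eligible for a housing assessment"

# The person is reduced to one integer, while everything gathered to
# produce it (trauma, policing history, health data) stays in the database.
print(triage(2), triage(5), triage(9))
```

Even in this toy form, the problem is visible: the decision hinges on arbitrary cut-offs, while the invasive data collected to produce the score remains on file and in circulation.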

This is what happens when systems-engineering logic replaces human decision-making. The assumption is that social problems can be solved by collecting as much data as possible, sorting it as quickly as possible, and generating rapid algorithmic decisions. But what actually happens is that old biases get amplified as rental markets move online.

Data Security and Privacy Matter Beyond Housing

Every time we accept another privacy violation as “just how things work now”, we make it harder to draw the line anywhere. The fight over AI tenant screening isn’t just about housing. It’s part of a much larger battle over data rights, privacy, and who gets to profit from our personal information. The same invasive profiling tactics are already spreading to other areas of life: job applications, insurance pricing, loyalty programs, credit scoring, and more. 

These days, we often hear people say that since privacy is already dead in the digital age, we might as well accept total surveillance and move on. But privacy isn’t dead. It has been, and will continue to be, under attack. And there’s a difference. 

What my fellow researchers and I have been uncovering points to at least a few action points:

Legal Limits on Data Collection: Federal and provincial governments must pass laws that clearly define what information landlords can and cannot request. These limits should be based on the principle of necessary proportionality: only information directly relevant to tenancy decisions.
Mandatory Data Transparency: When AI tools are involved in tenant screening, applicants must have the right to know and to understand what factors contributed to their acceptance or rejection. Tenants ought to be able to access their own data and scores, and to correct errors.
Right to Appeal: Where automated decision-making produces false judgments about an individual, applicants must have the right to appeal and request a thorough review.
Ban on Social Media Surveillance: Landlords should be prohibited from requiring access to applicants’ social media accounts or scanning years of online activity as a condition of tenancy. 
Real Enforcement Penalties: Privacy violations must come with meaningful penalties that actually deter companies from over-collecting data, not token fines that are a slap on the wrist for tech oligarchs.
Special Action Measures for Vulnerable Communities: Anti-discrimination laws must be updated and enforced to account for encoded algorithmic bias. When AI systems disproportionately harm racialized communities, low-income renters, or other underserved groups, that harm must be outlawed as unjust discrimination.

The Bottom Line: A More Digitally Secure Future

“When applying to Vancouver rentals, my application for an apartment was rejected on the basis of screening by an AI company called CERTN that helps landlords rate tenants using various background information. When I inquired about our results, the landlord shared the risk scores with me. I was deemed a “medium risk” for property damage, and we were also flagged as a couple because they forgot to combine (partner’s) and my applications, so the algorithm decided neither of us could afford to live there alone. Instead of asking us to confirm that we’d both be paying, the landlord presumably just chose someone with a higher rating in the system. We had been living together for 7 years and never missed a rent payment or damaged any property, so the discrepancies in our assessments are weird”.

– 36-year-old renter, Vancouver, BC
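To see how un-combined applications can produce a “cannot afford” result like the one described above, here is a minimal sketch of the kind of rent-to-income rule a screening tool might apply. The 30% threshold and all dollar figures are invented for illustration; CERTN’s actual affordability criteria are not public.

```python
# Illustrative only: a naive rent-to-income affordability check of the kind
# that fails when a couple's applications are not combined. The threshold
# and dollar amounts are invented for this example.

MAX_RENT_TO_INCOME = 0.30  # common rule-of-thumb threshold (illustrative)

def can_afford(monthly_income: float, monthly_rent: float) -> bool:
    """True if rent stays within the rent-to-income threshold."""
    return monthly_rent <= MAX_RENT_TO_INCOME * monthly_income

rent = 2400.0        # hypothetical one-bedroom rent
partner_a = 4000.0   # hypothetical individual monthly incomes
partner_b = 4200.0

# Scored separately (the error described in the quote), both partners "fail".
print(can_afford(partner_a, rent), can_afford(partner_b, rent))  # False False

# Scored as a household, the same couple passes comfortably.
print(can_afford(partner_a + partner_b, rent))                   # True
```

Assessed individually, each hypothetical partner falls short of the threshold; assessed as a household, the same couple clears it easily, which is consistent with the mismatch the renter describes.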

Our research grew out of stories like this renter’s, stories that should never happen. Examples of companies and landlords partnering on AI screening tools to profit from tenant data are multiplying faster than we can count. As a CERTN blog advises, “Tenant screening that considers more than a credit history is becoming the industry gold standard. Property managers embracing technology around social media scanning and psychoanalytical behavioural profiles will take the cream of the crop while others will be left with what remains. Don’t fall behind the competition. Knowledge is the key and the more you have upfront, the more time, money and headaches you’ll save”.

In the above instance, the renter found out that her partner’s Myspace profile from 18 years ago was included in the check, which naturally raises the question: is this information even necessary or relevant to renting a place? We can accept a future where algorithms have unlimited access to our personal lives, or we can demand something better. We can demand necessary proportionality: clear limits on what can be asked, strong privacy safeguards, transparency in automated decisions, and real accountability when systems cause harm.

Experiencing invasive AI screening in your rental applications? As a start, document it: take screenshots, save emails, and note which companies are using these tools. Share your story with friends, on social media, or with tenant advocacy groups. Join others in advocating for digital sovereignty. Your experience matters, and speaking up helps create change. 

We can, and need to, act individually and collectively. The housing crisis in Canada is real and urgent. But the solution isn’t giving landlords unlimited surveillance powers over prospective tenants. We’re at a crossroads. AI tenant screening is here to stay. But we still have a choice about how it’s used.