TikTok's Algorithm Pushed My Daughter to Suicide
How Social Media Companies Deliberately Feed Depression for Engagement
My fourteen-year-old daughter Molly spent six hours a day on TikTok watching content the algorithm fed her about depression, self-harm, and suicide methods. When I finally checked her phone after she hanged herself in her bedroom, I discovered that the platform had systematically served her a curated feed designed to keep her engaged by making her mental health worse. Internal documents prove the company knew exactly what it was doing and chose profit over the lives of vulnerable children.
The lawsuit my family filed against TikTok and its parent company ByteDance will likely take years to resolve, and it may ultimately be dismissed under the Section 230 protections that shield social media platforms from liability for user-generated content. But the discovery process has already revealed internal documents showing that company executives knew their algorithm amplified harmful content to vulnerable users, that they specifically identified teenagers with depression and anxiety as a high-engagement demographic to target with content that would keep them on the platform longer, and that they deliberately chose not to implement safeguards that would reduce this harm because those safeguards would also reduce the engagement metrics that drive advertising revenue. The company publicly claims to prioritize user safety and mental health, but the internal emails and strategy documents tell a very different story: the cynical exploitation of children's psychological vulnerabilities for profit.

Molly was a normal, happy child until she turned thirteen and got her first smartphone. Like millions of teenagers, she immediately gravitated to TikTok, where the endless scroll of short videos provided constant stimulation and social connection. For the first months her feed was typical teenage fare: dances, comedy sketches, makeup tutorials, videos about school and relationships. Then, gradually and then suddenly, it shifted to darker content about mental health struggles, depression, eating disorders, and self-harm. Not because she searched for any of it, but because the algorithm identified her as someone who engaged more deeply with emotional and psychological content and began serving her more of it.
The mechanism by which social media algorithms push users toward increasingly extreme content is well documented in the context of political extremism, where YouTube and Facebook recommendation systems have been shown to move users from mainstream conservative or liberal content toward increasingly radical and conspiratorial material, because extreme content generates stronger emotional reactions and longer engagement times. The same dynamic operates with mental health content. A user who watches one video about depression or anxiety is algorithmically fed increasingly intense content on those topics, creating a feedback loop: the platform identifies your vulnerability and then systematically exploits it by showing you content that makes you feel worse, because feeling worse keeps you scrolling, looking for validation, for solutions, or simply for the numbing effect of passive consumption. The more you engage, the more the algorithm learns about your specific triggers and preferences, and the more personalized the variations it serves to maximize your time on the platform.

Molly's phone, which I have preserved as evidence, shows a TikTok feed in her final weeks that was almost entirely content about depression, hopelessness, and suicide methods, with videos that romanticized self-harm and presented suicide as a reasonable response to teenage struggles. Mixed in with this content were advertisements for products marketed to teenagers, meaning companies were paying to reach an audience that TikTok had identified as vulnerable and mentally unwell. The platform was profiting twice: from the advertising revenue and from the increased engagement that came from feeding depressive content to depressed children.
The internal documents we obtained through discovery include research by TikTok's own data scientists showing that teenage users who engaged with mental health content became more depressed over time and were more likely to report suicidal thoughts. Rather than using this information to protect vulnerable users or to modify the algorithm to reduce their exposure to harmful content, executives discussed how to maximize engagement from this demographic while minimizing public relations risk if the harms became widely known. Specific strategies included not keeping detailed records of algorithm decisions so the company could claim ignorance if challenged; implementing superficial safety features, such as warning screens and resource hotlines, that research showed were ineffective but that provided legal and PR cover; and focusing mental health initiatives on older users and less vulnerable demographics, where the reputational benefits were higher and the actual risk of harm was lower.

The most damning evidence is an email chain in which a senior product manager acknowledges that reducing depressive content recommendations for at-risk teenagers would decrease engagement metrics in that demographic by an estimated twelve percent and suggests this is an acceptable trade-off for user safety. An executive responds that a twelve percent engagement decrease is not acceptable and that the safety team should find solutions that do not impact growth metrics. In other words, the company explicitly prioritized profit over the mental health and lives of its teenage users.
The broader context of this case is that social media platforms have understood for years that their products harm teenage mental health. Internal research from Facebook leaked by whistleblower Frances Haugen showed that Instagram makes body image issues worse for one in three teenage girls, and that the company knew its platforms contributed to anxiety, depression, and suicidal thoughts in young users but chose not to implement changes that would reduce those harms because the changes would also reduce engagement and revenue. TikTok's business model is even more dependent on algorithmic content delivery than Facebook's or Instagram's, which gives the company even stronger incentives to optimize for engagement regardless of psychological consequences. The result is a product that functions almost like a drug, delivering dopamine hits through novel content and social validation while gradually making users more anxious, more depressed, and more dependent on the platform for emotional regulation and social connection.

The teenage brain is particularly vulnerable to these manipulations. The prefrontal cortex, responsible for impulse control and long-term planning, is not fully developed until the mid-twenties, while the limbic system, responsible for emotional reactions and reward-seeking, is highly active. That imbalance makes teenagers naturally inclined toward immediate gratification and emotional intensity, and social media platforms exploit it by delivering constant stimulation and social feedback that hijacks normal development and creates dependency patterns that look remarkably similar to addiction in brain imaging studies.
The comparison to tobacco companies is apt. Both industries knowingly sold products they knew were harmful while publicly denying those harms. Both targeted young people, because early adoption creates lifetime customers who are difficult to lose even after the harms become apparent. Both hid internal research showing health risks while funding external research designed to muddy the waters and create doubt about causation. And just as tobacco regulation eventually caught up with industry practices after decades of denial and litigation, social media regulation is beginning to catch up with the platforms, though the process is slow and the companies are fighting every attempt at meaningful oversight or liability. They argue that they are neutral platforms rather than publishers, and that holding them responsible for user-generated content would destroy the internet as we know it. The reality is that their algorithms are not neutral; they are designed to maximize engagement through psychological manipulation. Acknowledging that platforms are responsible for the content they algorithmically amplify would not destroy the internet, but it would force them to prioritize user wellbeing over growth metrics.
Molly died three weeks before her fifteenth birthday. The note she left spoke of feeling that life was hopeless and that she would never be happy, language and sentiments that appeared frequently in the TikTok videos she had been consuming. We cannot prove definitively that TikTok caused her suicide, because mental health and suicide causation are always multifactorial and complex. But we can prove that the platform knowingly fed her content that made her depression worse, that it had the technical capability to reduce this harm and chose not to because doing so would impact revenue, and that the company's own research showed its product was contributing to suicidal ideation in teenage users like Molly. The legal system may ultimately decide that Section 230 shields the company from liability, but the moral reality is that TikTok and similar platforms are profiting from the psychological destruction of vulnerable children and teenagers, and that this is not an accidental side effect but a deliberate business strategy in which user harm is an acceptable cost of maximizing engagement and growth.

The changes needed to make social media safer for teenagers are not technologically difficult; they would simply require companies to prioritize user wellbeing over profit. Turn off algorithmic amplification for users under eighteen and show only chronological feeds from accounts they choose to follow. Implement real age verification rather than the joke of current systems, where children simply lie about their birthdates. Create mandatory time limits for teen users, regardless of whether they want them. Remove the infinite scroll and autoplay features designed to carry users past natural stopping points. If platforms claim these changes would destroy their business model, that is an admission that their business model depends on exploiting the psychological vulnerabilities of children, and such business models should not be permitted to exist, regardless of how much money they generate for shareholders.