Online Safety for Children: A Practical Guide for Parents

The first thing most parents reach for when they think about online safety is a parental control app. It's not the wrong instinct, but it's the wrong centre of gravity. The children who come to harm online are not, on average, the ones whose parents had weak filtering software. They are the ones who couldn't tell anyone what was happening. The most protective single thing in a child's online life is whether you are someone they would actually come to.

That doesn't mean throw the controls out. It means understand what they can and can't do, and put the conversational, relational work where it actually belongs.

The Healthbooq app covers digital wellbeing alongside the rest of family life — useful for tracking screen-related mood patterns and conversations had with older children.

The Three Risk Categories

The OECD and UK Council for Internet Safety both organise online risks for children into three buckets. The balance shifts with age, but the categories stay the same.

1. Content Risk

Exposure to material that is harmful or developmentally inappropriate. The major categories:

  • Pornography. UK Children's Commissioner research (Rachel de Souza, 2023): 79 per cent of children have seen pornographic content online by age 16; the average age of first exposure is 13; a meaningful minority first encounter pornography accidentally before age 11. Most exposure is now to violent or aggressive material, not the softer material parents may remember from their own youth.
  • Self-harm and eating disorder content. Particularly on Instagram, TikTok, and Tumblr. Algorithm-driven recommendation tends to amplify exposure once a child engages with related content even briefly. The Molly Russell inquest in the UK (2022) was a watershed moment — a 14-year-old whose Instagram and Pinterest accounts had been served thousands of pieces of self-harm content in the months before her death.
  • Extremist and radicalising content. Both Islamist and far-right material. PREVENT data and the National Counter-Terrorism Police note that online recruitment of teenagers into extremist movements is a growing pattern.
  • Misinformation and conspiracy material. Particularly affects 11–15-year-olds.
  • Commercial manipulation. In-app purchases, loot boxes, gambling-mechanism game design, "free-to-play" mobile games designed to maximise spend. Affects younger children most.

2. Contact Risk

People — usually adults, sometimes other children — making contact in ways that present risk. The biggest category here is online grooming.

Grooming patterns are now well-characterised by NSPCC, the Internet Watch Foundation, and CEOP (the National Crime Agency's Child Exploitation and Online Protection command):

  • It happens on mainstream platforms — Roblox, Snapchat, Instagram, TikTok, Discord, Fortnite, gaming voice chat. The "dark web" framing is misleading. Groomers go where children are.
  • It is gradual. Initial contact is friendly and shared-interest based ("I'm a fan of that game too"). Trust is built over weeks or months before any sexual content emerges.
  • It involves isolation tactics — encouraging the child to keep the friendship secret from parents and peers, building emotional dependency, framing the parents as people who "wouldn't understand."
  • It uses age deception in many cases — adult predators present as teenagers.
  • It often moves the child to encrypted platforms (Telegram, WhatsApp, Snapchat with disappearing messages) once trust is built.
  • The aim may be on-platform sexual content (images, video calls) rather than physical contact. The shift in the last decade has been heavily toward image-based abuse.

NSPCC Childline data shows reports of online grooming have increased every year since they began tracking it. The proportion of cases involving girls is around 75–80 per cent, but boys are also targeted, often with under-recognition because of stigma and reluctance to disclose.

3. Conduct Risk

Risks arising from a child's own behaviour online — either harmful behaviour they engage in, or behaviour they fall into without realising the consequences.

  • Cyberbullying. Cyberbullying Research Center surveys: around 37–46 per cent of UK and US adolescents report having been cyberbullied at some point. The features that make cyberbullying particularly harmful: it follows children home (no respite), can reach a wide audience instantly, and often happens at the same time as in-person bullying.
  • Sexting / sharing intimate images. Becomes a real risk from around age 12 onward. UK Crown Prosecution Service guidance treats teenagers sharing intimate images of themselves as a safeguarding rather than criminal matter, but the practical harms (image leaving private circulation, blackmail, school-wide distribution) are serious.
  • Sextortion. The fastest-growing pattern: a child (often a teenage boy) is convinced to send an intimate image, then immediately blackmailed for money or further images. NCA reporting in 2023–24 showed a sharp rise in sextortion incidents, including some linked to teen suicides. UK and US police forces have changed messaging to emphasise that the child should not pay, not engage further, and should report immediately.
  • Doxxing and harassment campaigns. Most relevant in late adolescence and around online communities.

Parental Controls: Useful, Not Sufficient

Controls work best for younger children and become progressively less effective as children get older. Use them, but understand their limits.

Network-level (router) filtering

The major UK broadband providers all offer free network-level filtering, managed through their apps or account settings:

  • BT Parental Controls / BT Family Insights
  • Sky Broadband Buddy / Sky Broadband Shield
  • Virgin Media Web Safe / Parental Controls in Virgin app
  • TalkTalk HomeSafe
  • Plusnet SafeGuard
  • NowTV Kids Safe

These block broad categories of content (pornography, gambling, self-harm) at the network level. They are effective for any device on the home wifi, including visiting friends' phones. They don't cover mobile data, public wifi, or VPN-routed traffic.

Device-level controls

  • iOS Screen Time (Apple): app limits, content restrictions, communication limits, downtime, family sharing with parental approvals.
  • Google Family Link (Android): similar functionality, weaker on third-party app restrictions than Screen Time.
  • Microsoft Family Safety (Windows / Xbox): screen time, content filtering, activity reports.

These give granular control and are worth setting up properly on every device a child uses. For an 8-year-old's tablet, this is the right level of intervention.

Platform-level controls

YouTube Kids (a curated, moderated subset of YouTube — appropriate for under-10s; main YouTube is not), Netflix Kids profile, Instagram Family Center, TikTok Family Pairing, Snapchat Family Center. These platform-internal tools have improved sharply since 2022 and are worth setting up.

What controls cannot do

  • They cannot prevent contact on platforms where age verification is essentially absent (gaming voice chat, Discord, Snapchat).
  • They cannot stop a determined teenager who knows their way around them — VPNs, second devices, friends' devices, school networks.
  • They cannot replace conversation about what content actually means (a child who has filters but encounters porn at a friend's house with no scaffolding to make sense of it is in some ways worse off).
  • They cannot detect distress. They detect content categories.

The Online Safety Act 2023 (UK)

The Online Safety Act, fully implemented from 2024–25 under Ofcom enforcement, puts new statutory duties on platforms:

  • Age verification for sites hosting pornography
  • Stronger duties to remove illegal content (child sexual abuse material, terrorism)
  • Specific duties to protect children from harmful content (self-harm, extreme violence, eating disorder material)
  • Transparency reporting requirements

The Act is genuinely useful and is shifting platform behaviour. But enforcement is gradual, technical workarounds exist, and the Act doesn't replace parental involvement. In the US, the nearest equivalent is a patchwork of state-level legislation (Utah, Texas, California, Florida), with no federal framework as of 2026.

What Actually Works: The Conversation

Sonia Livingstone's EU Kids Online project (LSE), running since 2006, is the largest European data set on children's online lives and the parental factors that protect them. The headline finding, repeated across reports: active parental mediation — discussing online experiences with children — has a stronger protective effect than restrictive mediation alone.

Children who report being able to talk to a parent about online experiences are more likely to report problems, more likely to ask for help, and have better outcomes when something does go wrong. Children with very restrictive parents but no conversation are at higher risk than children with moderate controls and frequent conversation.

The NSPCC and the UK Safer Internet Centre both build their advice around the same principle: stay involved, talk often, react well when your child does come to you.

The mistakes that close the conversation:

  • Reacting with panic or immediate restriction. A child who tells you about something worrying online and watches you immediately confiscate the phone, ban the platform, or rage at the offender will not tell you again.
  • Reacting with judgement. "Why on earth did you message them back?" "What were you thinking sharing your name?" The child has already learnt that lesson; what they need now is to know they can come to you.
  • Lecturing. A scripted "online safety talk" lands less well than ongoing curiosity. "What are you playing? Show me. What do you like about it?" beats "We need to talk about online safety."
  • Pretending you understand a platform you don't. Children sniff this out instantly. Better to ask them to teach you.
  • Treating children as silly for trusting people online. Adults trust people online too. The patterns are sophisticated and easy to fall for.

Age-by-Age Conversations Worth Having

5–7 years (first exposure to interactive content, typically tablet-based):

  • Personal information: don't share your real name, school, or where you live with anyone you don't know in person.
  • "If you see something on the screen that makes you feel weird, close it and come and tell me. You're never in trouble for telling me."
  • Watch what they're watching. YouTube algorithms can route a 6-year-old from a Peppa Pig video into deeply unsuitable content in three clicks.

8–11 years (own device increasingly likely; gaming with online elements; some social media despite age limits):

  • Strangers in games and on platforms — don't accept friend requests from people you haven't met. Don't tell people your real name or school.
  • "If anyone — anyone — asks you for a photo of yourself, especially without clothes, that's a thing to come and tell me about. Right away. You're never in trouble."
  • "If anyone asks you to keep a secret from your parents, that's the warning sign."
  • Grooming language at age-appropriate level — the NSPCC Underwear Rule (PANTS): Privates are private; Always remember your body belongs to you; No means no; Talk about secrets that upset you; Speak up, someone can help.
  • Cyberbullying — what it looks like, what to do, that you'll help if it happens.

12–14 years (social media, group chats, gaming voice chat, dating-adjacent platforms):

  • Image sharing — anything you send digitally can leave your control instantly. "Don't send anything you wouldn't want a parent, teacher, or future employer to see."
  • Sexting and the legal landscape — sharing intimate images of under-18s, including by under-18s, is technically illegal in the UK and most US states. Most cases are managed as safeguarding rather than prosecuted, but the consequences (school discipline, social fallout, blackmail risk) are serious.
  • Sextortion — you cannot get into trouble too serious to come and tell us. If someone is threatening you with an image, stop responding and tell us. We will help. Don't pay.
  • Online friendships with people they haven't met — a friendship of weeks or months with someone they know only online is worth talking about regularly. Not banning, just talking.
  • Mental health content — what is being served by their algorithm. Most teenagers don't know how much their feed is shaped by past engagement.

15–18 years (effectively full internet, often with own data, encrypted messaging, dating apps, financial services):

  • Risk shifts toward conduct and consequences. Sextortion remains a major risk, for boys particularly.
  • Consent and intimate images: legal framework, ethical framework, practical framework.
  • Online radicalisation pathways, particularly for boys — what they look like, why people fall into them.
  • Substance and gambling content, online drug purchase risk.
  • Their digital footprint and its implications.
  • Relationship: stay open, stay non-reactive, don't surveil to the point of breaking trust.

Recognising That Something Is Wrong

Signs that warrant a calm, non-confrontational conversation:

  • Increased secrecy about device use; turning the screen as you walk past
  • Spending dramatically increased time online, particularly late at night
  • Withdrawal, irritability, or distress connected to device use
  • A new online "friend" they don't want to talk about, especially an older one
  • Unexplained gifts, gaming credit, or money
  • Mention of someone asking them to keep a secret
  • Sleep disruption, school decline, or withdrawal from peer activities
  • Searching for self-harm, eating disorder, or suicide content (many parental control products can flag this)
  • Receiving messages they immediately delete

The right response is not to seize the device. Seizing the device:

  • Destroys the digital evidence police would need
  • Cuts the child off from the support they may need
  • Sends an immediate "I am not safe to come to" signal that closes off future disclosure

The right response is to stay calm, ask questions, listen, and seek help.

What to Do If Something Has Happened

Suspected grooming, contact with a predator, or sextortion:

  • Don't confiscate the device in anger. Explain to the child that you're going to keep it safe, because the messages need to be preserved as evidence.
  • Don't respond on the child's behalf. Engagement makes things worse and can escalate sextortion.
  • Take screenshots of all relevant messages, profiles, and platform usernames.
  • Block the perpetrator on the platform.
  • Report to the platform (in-app reporting on all major platforms).
  • Report to CEOP at ceop.police.uk — the National Crime Agency's online reporting tool, which can be used to raise a concern or make a formal report.
  • Call 999 if a child is in immediate danger.
  • Call NSPCC helpline 0808 800 5000 for advice and support.
  • For US families, CyberTipline at NCMEC (cybertipline.org) is the equivalent of CEOP.

Cyberbullying:

  • Document (screenshots), block, report to platform, report to school if peers are involved.
  • Don't reply.
  • For image-based harassment or "revenge porn" specifically, the Revenge Porn Helpline (0345 6000 459, UK) supports those aged 18 and over; under-18s should use Childline's Report Remove route instead.

Intimate images shared:

  • The Internet Watch Foundation (iwf.org.uk) handles reports of child sexual abuse material including images of children that have been shared. Their Report Remove tool, run with NSPCC and Childline, lets young people under 18 confidentially report intimate images of themselves and have them taken down across major platforms.
  • Children can also call Childline 0800 1111 confidentially for support.

Self-harm content exposure or distress:

  • Childline 0800 1111 (under-19s, confidential)
  • Samaritans 116 123 (any age)
  • GP for ongoing concerns; same-day if there is risk.

What Helps Long-Term

Some things that compound:

  • A device-free family meal, even three times a week
  • Phones out of bedrooms at night, especially for under-14s
  • A shared family "media plan" — not imposed top-down, drawn up together
  • Watching content with younger children, not parking them with it
  • Modelling your own healthy digital behaviour — the way you treat your own phone is what you're teaching
  • Praising disclosure when it happens — "I'm really glad you told me about that" beats any lecture

The internet your child is growing up in is not the internet you grew up in. Outsourcing safety to a platform, a school, or a piece of software is not enough. The role only a parent can fill is being the trusted adult the child will tell. That role is built quietly, over years, from a thousand small interactions in which you reacted well.

Key Takeaways

Online safety isn't primarily about parental controls — it's about whether your child will tell you when something happens. Sonia Livingstone's EU Kids Online work and the NSPCC's research both show that children who feel they can talk to a parent about online experiences are substantially better protected than children with tight restrictions and no communication. The actual risks split into three categories: content (graphic and harmful material), contact (grooming, predators), and conduct (cyberbullying, sexting). The UK's Online Safety Act 2023 puts new duties on platforms, but parental conversation remains the strongest single protective factor. Use controls for under-10s; shift toward judgment and communication from about 11 onward.