Cyberbullying may double risk of self-harm, suicidal behaviour

LONDON: Children and young people who face cyberbullying are more than twice as likely to self-harm and enact suicidal behaviour, a new study has found.

The research also suggests that it is not just the victims of cyberbullying who are more vulnerable to suicidal behaviours; the perpetrators themselves are at higher risk of experiencing suicidal thoughts and behaviours as well.

Cyberbullying is the use of electronic communication to bully another person, for instance by sending intimidating, threatening or unpleasant messages over social media.

Researchers from Swansea University, the University of Oxford and the University of Birmingham in the UK looked at over 150,000 children and young people across 30 countries over a 21-year period.

The findings, published in the Journal of Medical Internet Research, highlighted the significant impact that cyberbullying involvement (as bullies and victims) can have on children and young people.

The researchers say the findings show an urgent need for effective bullying prevention and intervention strategies.

“Prevention of cyberbullying should be included in school anti-bullying policies, alongside broader concepts such as digital citizenship, online peer support for victims, how an electronic bystander might appropriately intervene; and more specific interventions such as how to contact mobile phone companies and internet service providers to block, educate, or identify users,” said John.

“Suicide prevention and intervention is essential within any comprehensive anti-bullying programme and should incorporate a whole-school approach to include awareness raising and training for staff and pupils,” she said.

The research also found that students who were cyber-victimised were less likely to report and seek help than those victimised by more traditional means, thus highlighting the importance for staff in schools to encourage ‘help-seeking’ in relation to cyberbullying. PTI


Cyberbullying can take lasting toll on teens

Camryn Cowdin was checking her Facebook page when she saw hateful posts from a person she considered a friend. Her name was never used, but she knew the words were about her.

“He would directly reference a comment or situation that happened between him and I,” Cowdin, 16, said. He threatened to end their friendship. “He’d say, ‘You’re dead to us.’”

The comments left her feeling depressed, Cowdin said. She cried every night. She didn’t want to go to school.

“He pretty much tried to ruin me,” said Cowdin, a student at Highlands Ranch High School who loves making costumes for Comic Con, journaling and music — a set of headphones often hangs around her neck. “I know a lot of beautiful people who have been ruined by social media.”

Cowdin’s experience of being bullied over a social media platform is part of an increasing national trend in cyberbullying.

In a study conducted by the Cyberbullying Research Center, the number of students nationally who reported experiencing cyberbullying nearly doubled from 18.8 percent in 2007 to 33.8 percent in 2016. The report surveyed more than 20,000 middle and high school students across the country from 2002 to 2016.

A 2015 study by the Centers for Disease Control and Prevention found that 15.5 percent of high school students at public and private schools across the U.S. were cyberbullied. In middle schools, 24 percent of students experienced cyberbullying.

Defined as using technology to harass another person, cyberbullying takes many forms: sending a mean text message, posting hurtful comments on social media, spreading inappropriate or embarrassing photos of someone over social media, spreading rumors online.

Whereas traditional bullying occurs in person, cyberbullying allows for anonymity and secrecy.

On apps like Snapchat, messages and photos disappear after a certain amount of time. The Whisper app is used to anonymously post confessions and secrets. On Ask.fm, users anonymously ask and answer questions. Other social media platforms popular among young people include Instagram, a photo-sharing app, and Kik, an app used to instant message friends or strangers.

“Cyberbullying takes it that next step where the chances of the teacher or parent being aware are very low,” said Emily Laux, a pediatric psychologist at Children’s Hospital Colorado. “That’s a concern because we aren’t able to help kids manage it. It’s so much harder for parents and adults to intervene.”

Online bullying can cause lasting damage to teenagers, research shows, resulting in mental health issues such as depression and anxiety, sleep problems, difficulty adjusting to school or, in some extreme instances, suicide.

For some educators, the impact is overwhelming.

“At different times of the day, we would have students come down extremely upset and distraught, broken friendships, things that were said about them, rumors being passed around, name-calling,” said Ann Guenther, assistant principal of Rocky Heights Middle School in Highlands Ranch, which at the beginning of the school year implemented a cellphone ban to counter distractions in the classroom. “When you have students focusing on those pieces, they can’t focus on school.”

Delanie Vieira, a freshman at Rock Canyon High School, was in eighth grade when two girls and a boy started messaging her on the photo-sharing site Instagram. They told her to kill herself, she said.

But Vieira, who has a solid friend group and is self-confident about who she is, didn’t let the words get her down. Instead, she went to the principal.

“I was hurt that they would target me when I felt like I did nothing wrong,” said Vieira. “It shocked me.”

Vieira used to worry about what other people thought about her on social media, she said, but now she has a different outlook.

“At this point, I don’t really care, I’m just sharing photos of my friends. I’m just growing up,” Vieira said. “It’s for my own entertainment and enjoyment. It only matters if my close friends care.”

But for some young people, cyberbullying has had devastating consequences.

In 2015, Colorado passed Kiana’s Law, named after Kiana Arellano, a 14-year-old from Highlands Ranch who in 2013, after receiving hateful messages from classmates online, attempted suicide. She survived, but the lack of oxygen left her a paraplegic and unable to speak. Her mother testified before the Legislature to increase the penalties for cyberbullying. As a result, the act is considered a misdemeanor crime that warrants up to one year in county jail, a fine of up to $1,000 or both.

For the past two years, Sgt. Lori Bronner of the Douglas County Sheriff’s Office has overseen the school resource officer program at Douglas County schools. A resource officer is a deputy who is responsible for student safety on a school campus and also is trained in mental health first aid.

She blames the impersonal aspect of social media platforms for allowing students to say things they otherwise wouldn’t.

“Kids say really mean and hateful things over social media,” she said. “Social media has made it easier because you are not face-to-face. You’re not there to take the brunt if someone wants to say something back to you.”

Students need to know they can report harmful content they see on social media to a teacher or adult, said members of Douglas County School District’s Prevention and School Culture team. Its seven members teach seminars on positive life skills, such as healthy boundaries and substance abuse prevention to students throughout the district.

“Kids feel like they always have to comment or like something,” said Cindy Redfern, a former elementary school teacher on the team. “They can say ‘no.’ If your friend is doing something that you don’t think is OK, you report it.”

Families should have open communication about what is happening on social media and, if needed, parents should intervene, said Anne Metz, also a team member who formerly worked as a registered nurse.

“It’s important for parents to understand that they have the right to be in charge of the cellphone that they are giving to their kid,” she said. “It’s important to talk to their kids about what is expected.”

After her negative experiences on Facebook, Cowdin blocked the person who was posting hurtful comments about her and eventually deactivated her account.

Kenya: Moral Cop Defends Deputy Governor Caught in Compromising Situation

Kenya Film Classification Board (KFCB) boss Ezekiel Mutua has described as barbaric the trending video showing the Kirinyaga Deputy Governor, Peter Ndambiri, naked and at the mercy of a group of men filming him and a naked woman beside him.

The video is circulating on WhatsApp, where it was released on Tuesday evening, and has become the subject of debate on morning radio and Twitter.

Dr Mutua, dubbed Kenya’s Moral Policeman for the stand he often takes on obscenity, jumped onto the wave Wednesday morning, saying filming the deputy governor was beyond justification and describing it as “barbaric, outrageous and a violation of human rights.”

“In my view, the invasion of privacy and primitive harassment of a couple in private, whatever their sin or crime, is the evidence of how low we have sunk as a society. If the video doing rounds is true, then this is a major blot to the Devolution Conference 2018 and a slur on the people of the great county of Kakamega,” said Dr Mutua on Facebook.

The KFCB CEO appeared to have been working with the presumption that the video was shot in Kakamega, where the Deputy Governor was photographed playing football with his colleagues. It is still not clear when and where the video was recorded.

Gatundu South MP Moses Kuria also spoke out on the issue, suggesting that the Deputy Governor was the victim of an extortion ring and that he should have first reported to the police.

He said the film was the work of extortion cartels that are rampant in Kiambu County and urged the Deputy Governor to report the matter to the police.

“I always say it is nonsensical to pay anyone so as not to give a story to the media,” he said on Facebook. “That breeds extortion and encourages criminal networks to thrive. It is a pity that the Kirinyaga DG did not immediately report such extortionists to the police,” he added.

In the video, a group of men slap, harass and whip Mr Ndambiri, who at first says he is a businessman, then under pressure says he is a politician, before finally cracking and admitting that he is indeed the Deputy Governor.

Nominated Senator Millicent Omanga in a Facebook post urged police to take up the matter and arrest the people behind the leak that has gone viral.


“Recording someone when he/she is in a compromising position and sharing the same with the public is wrong! Think about the emotional torture and ridicule you’re exposing the family of that person to. No one here is flawless. The police should reign in on the thugs who held the Kirinyaga DG under siege and apprehend them to serve as a punitive lesson to others who may be harboring such intentions in the future,” she wrote.

Nyeri MP Ngunjiri Wambugu, however, opted to distance himself from the matter, saying the Deputy Governor should fight his own battles.

“For those trying to divert attention from our pursuit of justice for Ms. Martha Miano by suggesting that I should also pick up the issue of justice for ‘boy-child’ DG Kirinyaga – please note that DG Kirinyaga is at a much higher pecking order than even I am so he can actually do more about that for himself, than I can do for him. Plus, I focus on helping those less privileged who are dealing with privileged bullies,” he said.

Dr Mutua also pointed out that the filming had not been licensed by KFCB, adding that it is neither a light matter nor merely one of morality.

“It’s a crime akin to terrorism. It’s the worst form of violation a human being can go through. I shudder to imagine the mental anguish caused to the couple and all those associated with them,” he added.

Dr Mutua asked the Director of Public Prosecutions and the Inspector-General of Police to intervene.


Tribeca doc ‘Netizens’ highlights the online harassment of women


Cynthia Lowen’s Netizens is about three women whose lives have been profoundly affected by cyber sexual harassment. Making its U.S. premiere at Tribeca Film Festival this week, the documentary details the various forms these crimes take, including cyber-stalking, the posting of non-consensual pornography and character attacks. The latter led Tina, a successful businesswoman and one of Lowen’s subjects, to be rebuffed by potential employers, including J.P. Morgan. Through intimate and often moving testimony, as well as brief interviews with experts, Lowen crafts an absorbing documentary that is part biography and part discourse. She does so in a cinéma vérité style, with a minimal use of music, leaving viewers with the feeling that they are in the same room with her subjects.

In the opening scene of Netizens, Carrie Goldberg, a New York City lawyer and another of the three women profiled in the documentary, is investigating a rape that was filmed and posted on the Internet. The 13-year-old victim was forced to leave school in the aftermath. The perpetrator, the same age as the girl, was not reprimanded for his crime. Goldberg, who was once the target of cyber harassment, says that she had to frame her own case against the perpetrator because of the lack of knowledge and experience on the part of law enforcement and legal authorities. She points out that New York state has no laws preventing cyber crimes like those suffered by her client. Her law firm now uses privacy statutes to try such cases.

Anita Sarkeesian, Lowen’s third subject, is the creator of the popular web series “Feminist Frequency” as well as the more recent “Ordinary Women,” a women’s history and biography project. Sarkeesian endures constant cyber harassment and death threats. The documentary depicts the security staff and bomb sniffing dogs that precede her to a speaking engagement. Sarkeesian observes that at times she becomes so habituated to the proliferation of misogynistic comments on her social media accounts that she feels no emotions at all. Her testimony in the documentary includes a moment when, in speaking about misogyny, her voice seems to rise an octave, and her incisive humor fails her. The persistence of misogyny before the Internet era, she says, is what inspired the stories of female iconoclasts and rebels in “Ordinary Women.”

One of Lowen’s experts on cyber harassment, Soraya Chemaly, director of the Women’s Media Center Speech Project, points to research that underpins some of the documentary’s discourse. Recent findings confirm what women have long felt: workplaces with stark gender disparity, such as Silicon Valley and Tina’s milieu of finance and banking (where the workforce is 80 percent male and 20 percent female), are hotbeds of sexual harassment of all kinds, including online harassment. Danielle Keats Citron, a scholar and the author of Hate Crimes in Cyberspace, explains that cyber crimes have forced women to shut down their social media accounts; she also echoes Chemaly’s concerns that attacks on women have the overall effect of silencing them and of confining them to their homes.

Shortly after a former boyfriend posted false stories of her past as a prostitute, Tina left her job and began sleeping 16 hours a day. She says she stopped attending social events for fear of meeting people who would then discover the stories that were the first to pop up in any Internet search of her name. For five years, Tina worked at low-wage jobs because no one in finance would hire her; finally, she secured a position in her field through the efforts of friends who testified to potential employers about the cyber harassment she endured. Her legal case against her attacker ended with an agreement that he relinquish control of the accounts he had hacked and provide the passwords he had used, allowing Tina to remove his incriminating tales. In a particularly bitter reflection of her plight, she twice points out that her attacker never suffered any repercussions.

One of the most poignant scenes in Netizens is when Goldberg allows Lowen into a storage unit that she has not opened in two years. Boxes are labeled “Tainted” and “Bad Year.” As the lawyer extracts the contents, she recalls her court case against her attacker, a boyfriend of four months. She comes across pairs of wedges that she says she bought for her court appearances on the advice of counsel. Throughout the film, even when she is in jeans investigating the child rape, notebook in hand, Goldberg wears spike heels or high-heeled boots. She will never wear the wedges again, she says; she feels uncomfortable in anything but heels. Near the end of the documentary, while gazing out the window of her new office, Goldberg points to the various courts below that trace the history of her transformation, from a fledgling lawyer and plaintiff in a harassment case to a successful attorney with an expanding staff that prosecutes harassment cases — a class of cyber crimes largely unrecognized in case law, or by state and federal legislators.


Here's Everything You're Not Allowed To Post On Facebook

Facebook has publicly released its most complete community guidelines to date, after years of keeping secret the specific rules its moderators use to govern the platform. The update tacked more than 5,000 words onto the already unwieldy document, which now includes highly specific examples of banned or heavily regulated content.

Is any of it at all surprising? Well, yes. Most platforms have rules that outline common-sense genres of content they’d rather not be liable for – harassment, hate speech, gore, child endangerment, etc. – and these are no different. But as a reflection of its size and global reach, Facebook’s guidelines include some of the most granular examples of what not to do online. Cannibalism is off limits. “Sexualized massages” are specifically barred. Staged animal fights won’t fly, and neither will videos of animals being processed for food. Images of buttocks or an anus are a no-go, “unless photoshopped on a public figure.”

Facebook may be the only platform (at least that I’m aware of) to specifically flag “crisis actor” conspiracies peddled by malicious wingnuts that target the victims of mass tragedies – an example one hopes other social sites follow.

It’s unfortunate that we’re only seeing this information as part of Facebook’s desperate campaign to win back goodwill after the Cambridge Analytica scandal thrashed user trust. And it’s more unfortunate still that these guidelines are nowhere to be found on the site’s front page, and that if someone happens to navigate to the Community Standards, these rules are divided into 22 separate pages housed within six subsections. So we reprinted them all below where they’re easily searchable:


Every day, people come to Facebook to share their stories, see the world through the eyes of others, and connect with friends and causes. The conversations that happen on Facebook reflect the diversity of a community of more than two billion people communicating across countries and cultures and in dozens of languages, posting everything from text to photos and videos.

We recognise how important it is for Facebook to be a place where people feel empowered to communicate, and we take our role in keeping abuse off our service seriously. That’s why we have developed a set of Community Standards that outline what is and is not allowed on Facebook. Our Standards apply around the world to all types of content. They’re designed to be comprehensive – for example, content that might not be considered hate speech may still be removed for violating our bullying policies.

The goal of our Community Standards is to encourage expression and create a safe environment. We base our policies on input from our community and from experts in fields such as technology and public safety. Our policies are also rooted in the following principles:

Safety: People need to feel safe in order to build community. We are committed to removing content that encourages real-world harm, including (but not limited to) physical, financial, and emotional injury.

Voice: Our mission is all about embracing diverse views. We err on the side of allowing content, even when some find it objectionable, unless removing that content can prevent a specific harm. Moreover, at times we will allow content that might otherwise violate our standards if we feel that it is newsworthy, significant, or important to the public interest. We do this only after weighing the public interest value of the content against the risk of real-world harm.

Equity: Our community is global and diverse. Our policies may seem broad, but that is because we apply them consistently and fairly to a community that transcends regions, cultures, and languages. As a result, our Community Standards can sometimes appear less nuanced than we would like, leading to an outcome that is at odds with their underlying purpose. For that reason, in some cases, and when we are provided with additional context, we make a decision based on the spirit, rather than the letter, of the policy.

Everyone on Facebook plays a part in keeping the platform safe and respectful. We ask people to share responsibly and to let us know when they see something that may violate our Community Standards. We make it easy for people to report potentially violating content, including Pages, Groups, profiles, individual content, and/or comments to us for review. We also give people the option to block, unfollow, or hide people and posts, so that they can control their own experience on Facebook.

The consequences for violating our Community Standards vary depending on the severity of the violation and a person’s history on the platform. For instance, we may warn someone for a first violation, but if they continue to violate our policies, we may restrict their ability to post on Facebook or disable their profile. We also may notify law enforcement when we believe there is a genuine risk of physical harm or a direct threat to public safety.

Our Community Standards, which we will continue to develop over time, serve as a guide for how to communicate on Facebook. It is in this spirit that we ask members of the Facebook community to follow these guidelines.

Violence and Criminal Behaviour

1. Credible Violence

Policy Rationale

We aim to prevent potential real-world harm that may be related to content on Facebook. We understand that people commonly express disdain or disagreement by threatening or calling for violence in facetious and non-serious ways. That’s why we try to consider the language, context and details in order to distinguish casual statements from content that constitutes a credible threat to public or personal safety. In determining whether a threat is credible, we may also consider additional information like a targeted person’s public visibility and vulnerability. We remove content, disable accounts, and work with law enforcement when we believe there is a genuine risk of physical harm or direct threats to public safety.

Do not post:

The following threats:

Credible statements of intent to commit violence against any person, group of people, or place (city or smaller). We assess credibility based upon the information available to us and generally consider statements credible if the following are present:

- A target (person, group of people, or place), and
- Bounty/demand for payment, or
- Mention or image of a specific weapon, or
- Sales offer or ask to purchase a weapon, or
- Spelled-out address or named building, or
- A target and two or more of the following details (can be two of the same detail): location, timing, method

Any statement of intent to commit violence against a vulnerable person (identified by name, title, image, or other reference) or vulnerable group, including (but not limited to) heads of state, witnesses and confidential informants, activists, and journalists.

Calls for violence or statements advocating violence against the following targets (identified by name, title, image, or other reference):

- Any vulnerable person or group, including (but not limited to) heads of state, national elected officials, witnesses and confidential informants, activists, and journalists
- Public individuals, if credible as defined above
- Groups of people or unnamed specific person(s), if credible
- Places, if credible
- Where no target is specified but a symbol representing the target or a visual of weapons is included

Aspirational and conditional statements of violence against:

- Any vulnerable group
- Public individuals, if credible (unless the individual is convicted of certain crimes or is a member of a dangerous organisation)
- Vulnerable person(s), if credible
- Groups of people or unnamed specific person(s), if credible
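To make the shape of this policy concrete, the credibility checklist quoted above can be read as a simple rule evaluator: a target plus at least one escalating signal, or a target plus two or more concrete details. The sketch below is purely illustrative; the field names and combination logic are assumptions made for this example, not Facebook's actual moderation code.

```python
# Toy illustration of the "credible statement" checklist quoted above.
# All field names and the combination logic are assumptions for this
# sketch; this is not Facebook's actual moderation system.
from dataclasses import dataclass


@dataclass
class ThreatSignals:
    has_target: bool = False         # a named person, group of people, or place
    bounty_or_payment: bool = False  # bounty or demand for payment
    specific_weapon: bool = False    # mention or image of a specific weapon
    weapon_sale: bool = False        # sales offer or ask to purchase a weapon
    named_location: bool = False     # spelled-out address or named building
    detail_count: int = 0            # number of location/timing/method details


def is_credible(signals: ThreatSignals) -> bool:
    """A statement needs a target plus at least one escalating signal,
    or a target plus two or more concrete details."""
    if not signals.has_target:
        return False
    escalator = (signals.bounty_or_payment or signals.specific_weapon
                 or signals.weapon_sale or signals.named_location)
    return escalator or signals.detail_count >= 2
```

Under this reading, a statement naming a target and a specific weapon would be flagged, while a statement with details but no identifiable target would not.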

Deepfake: Fake Obama video calling Trump dipshit is a disturbing trend

A realistic-looking video that seemed to show former President Barack Obama cussing and calling President Donald Trump a “total and complete dips—,” went viral on Tuesday, bringing attention to the dangers of a controversial video-editing technology that many have called “the future of fake news.”

About halfway through the video, originally published by BuzzFeed, it is revealed that Obama had actually not uttered those words and that they were actually said by “Get Out” director and writer Jordan Peele, whose voice and mouth had been digitally inserted into an original — much less scandalous — video of the former president.

Peele, BuzzFeed, and Monkeypaw Productions used a controversial but widely available software to make the video, in an effort to demonstrate the dangers of “deepfakes,” aka digitally manipulated videos that have the power to “make it look like anyone is saying [or doing] anything at any point in time,” that didn’t actually happen. According to BuzzFeed, the video took 56 hours to make, along with the assistance of a professional video editor.

“So the good news is it still requires a decent amount of skill, processing power, and time to create a really good ‘deepfake,'” said BuzzFeed’s news-media editor, Craig Silverman, in a post that accompanied the video.

Unfortunately, this technology is already being used by nonexperts for nefarious purposes, including inserting the faces of celebrities into pornographic videos, creating, in some instances, very convincing and disturbing results.

Deepfakes are most commonly created with FakeApp, a free AI program that was popularized in fake-video-sharing forums on Reddit and Discord and first reported on by Motherboard in December 2017. The software requires a large number of photos of the person whom the user wishes to insert into a video, so celebrities and public figures — like former presidents — have become naturally easy targets.

Even beyond nonconsensual pornography, the greatest potential dangers for this technology have only begun to emerge. Many experts have begun to ask what this technology, along with sophisticated audio editing, could mean for the future of fake news and media in general.

“It may sound basic, but how we move forward in the age of information is going to be the difference between whether we survive or whether we become some kind of f—ed-up dystopia,” says Peele, in unison with the artificial Obama, who eerily and convincingly utters the same words.


Revenge pornography: A victim’s dilemma

The allegations are that a young woman committed suicide after her boyfriend posted indiscreet videos of her on the internet.

Whether the story is true or not is not as relevant as the issues it raised.

What does the law say about pornography and even more so revenge pornography?

Is there any recourse for victims given that pornography at whatever level is illegal in Zimbabwe?

Revenge pornography, also called non-consensual pornography, is the unauthorised sharing and distribution of sexual images of another person without their consent.

It is often committed by ex-lovers to spite, hurt and humiliate their victim and maximise shame.

In extreme cases, the graphic images are sent to the victim’s family and friends; in the worst cases, they may be uploaded onto porn websites along with the victim’s contact details in order to maximise the nuisance and damage.

The law

In Zimbabwe, Section 13 of the Censorship and Entertainments Control Act prohibits the importation, printing, publishing, manufacturing, displaying, selling, offering or keeping for sale of any publication, picture or record, as well as the playing of pornographic material in public.

Of importance for present purposes are the aspects of manufacturing and distribution.

Sadly, in Zimbabwe a victim of revenge porn is also an accomplice to the crime.

In Zimbabwe, consenting to make a sex tape and participating in the film is itself illegal; the offence does not turn on consent.

Manufacturing any pornographic content is a crime even if it is consensual.

A victim who endures the shame and humiliation of revenge porn is not exonerated of the crime merely because they were violated by the non-consensual sharing of the content.

If the State decides to prosecute the case, all the manufacturers of the pornographic material will be liable in terms of Section 13.

In some countries with advanced information technology legislation, revenge pornography has been isolated and criminalised.

Zimbabwe has no stand-alone revenge pornography laws, but victims can get recourse through other delictual claims and sue for damages incurred such as humiliation, pain and suffering.

Censorship Board

The Censorship and Entertainments Control Act (Chapter 10:04) is relevant because it oversees the quality of content to ensure it remains within the bounds of decency.

It is a criminal offence to import, distribute and publish films or material that is not approved by the Censorship Board.

The board has the power to examine publications, books, pictures, films or songs and generally to decide what is or what is not suitable or decent enough for public consumption.

Officially banned content is published by notice in the Government Gazette.

The board considers objections to its decisions.

It plays an advisory role to the Home Affairs ministry on the propriety and decency of public entertainment and publications meant for public consumption.

The minister may override the board and set aside or vary its decisions.

Issuance of entertainment licences

The board issues entertainment licences.

No person is allowed to perform or permit public entertainment acts unless such acts have been approved by the board.

The board shall not approve any public entertainment which in its opinion is contrary to good morals and public decency.

However, the board sometimes allows the performance of questionably raunchy public entertainment acts.

Licences can be withdrawn for breach of the licence terms of issuance.

Exemptions, certificates and licences can be revoked at any time at the board’s discretion.

The board considers objections from applicants and members of the public over any of its decisions.

Possession of prohibited content

The Act prohibits people from keeping “without lawful reason” publications, pictures or recorded material that are indecent or obscene.

A great number of people, especially men, keep loads of pornographic material on their phones.

Some people’s phones, particularly their media galleries, could send them straight to the gallows.

The mere possession of pornographic content is a crime; manufacturing and distribution are treated even more seriously, and perpetrators can be arrested.

Employers also lose out through time wastage, as employees with unrestricted internet access work fewer hours while browsing pornographic sites.

People have lost jobs, families and entire careers over a few pictures carelessly left on a computer’s desktop or on a shared computer.

In some companies the penalty for viewing pornographic content is instant dismissal.

Definition of indecent and obscene

A thing or act is deemed as indecent if it has the tendency to deprave or corrupt the minds of persons who are likely to be exposed to it or to be influenced in any way.

The Censorship Board’s approval is also required for entertainment likely to cause public outrage or to be repugnant to persons exposed to it.

This applies to images or films depicting horror, cruelty or violence.

Other than morality issues, the board also restricts publications and entertainment likely to be contrary to the interests of defence, public order and security and the economic interests of the State.

Help for the victim

The first thing to do is to immediately flag and report the abuse to the host site.

These days, everything takes place on social media sites such as Facebook, WhatsApp, YouTube and Twitter. Wars are fought and babies are born on social media, with up-to-the-minute and largely unnecessary details shared.

Victims will have to describe the details of the abuse.

Most decent sites will pull the offending media down and close or suspend the abuser’s account.

The victim should make a police report and be prepared to also take some responsibility for the manufacture of the pornographic media.

However, this should not deter them from seeking justice for themselves.

The penalty in such a case would be much lighter than that for the revenge porn itself.

Despite whatever shame they may feel, the victim should seek compensation and civil damages from the abuser.


Revenge pornography ban tramples free speech, law tossed out – where else but Texas! • The Register



A Texas appeals court last week ruled that the US state’s Relationship Privacy Act, which prohibits the disclosure or promotion of intimate images without the consent of those depicted, is unconstitutional.

Enacted in 2015, the Texas law was intended as a way to stop what’s known as revenge porn, in which a person discloses intimate sexual images, online or otherwise, to cause harm and embarrassment to another person.

The law covered images taken under circumstances in which the depicted person had a reasonable expectation that the material would remain private.

It’s one of 38 state laws that have been enacted in the US to combat revenge porn, also referred to as nonconsensual distribution of pornographic images because the perpetrator’s motivation – revenge or otherwise – wouldn’t mitigate the act.

In its consideration of an appeal by defendant Jordan Bartlett Jones, accused in a civil complaint last year of revealing an explicit image of a woman without consent, the 12th Court of Appeals in Tyler, Texas, found the privacy law too broadly drawn.

“We have concluded that Section 21.16(b) [of the Relationship Privacy Act] is an invalid content-based restriction and overbroad in the sense that it violates rights of too many third parties by restricting more speech than the Constitution permits,” the appeal court ruled, directing the trial court to dismiss the charges.

Expectation of privacy

The cited section of the law concerns the court because it makes third parties liable for the distribution of covered content even if the defendant had no knowledge of the circumstances under which the image was taken or the privacy expectations of the person depicted.

The court also noted that the law does not consider whether the disclosing person had any intent to do harm.

The impact of the ruling is limited to the 17 Texas counties under the state’s 12th Court of Appeals. The ruling may not stand, however: Texas’ Office of the State Prosecuting Attorney intends to have the decision reviewed.

In an email to The Register, Stacy Soule, State prosecuting attorney, said, “I will first seek rehearing in the 12th COA. If unsuccessful, I’ll file a petition for discretionary review in the Court of Criminal Appeals, which is Texas’ court of last resort for criminal cases.”

Several revenge porn laws have run into constitutional problems. Arizona revised its 2014 law after a court challenge the following year. Vermont’s Supreme Court is reviewing that state’s revenge porn law. And the governor of Rhode Island vetoed the state’s revenge porn bill in 2016 over concerns about its constitutionality.

A federal law, the ENOUGH Act, which stands for Ending Nonconsensual Online User Graphic Harassment, was introduced late last year by U.S. Senators Kamala Harris (D-CA), Richard Burr (R-NC), and Amy Klobuchar (D-MN), in conjunction with Rep. Jackie Speier (D-CA). It awaits consideration by the Senate Judiciary Committee. ®


Twitter Plans To Prevent Revenge Porn With Stricter Policies, But Will It Work?

Social media has always had problems with filtering and monitoring inappropriate content, but those issues have been extra toxic on Twitter in particular because it’s a unique type of service: it’s built on microblogging, meaning it’s short, it’s quick, and it spreads fast.

It’s not difficult to find hate speech, propaganda, sexism, racism, and all kinds of negative isms on Twitter. Just two or three clicks plus plenty of scrolling and users will immediately see the site’s dark, appalling side — and that’s exactly the problem, how easy it is to get there.

Twitter epically fails at shutting down inappropriate content, and that hasn’t really changed for a long time. Some users even regard Twitter as an incredibly toxic place on the internet, which is a pretty staggering description for a site that’s not even niche. Is it cancer for the eyes? Not quite — but it’s headed there at an alarming pace.

Twitter has dealt with issues of harassment and toxicity in the past, with nearly always the same result: it fails. But it’s now promising that it will crack down harder on many unsettling activities on the site, specifically “revenge porn.”

Twitter said on Oct. 27 that it would begin imposing stricter rules on sharing sexual photos and videos of other people without their consent, otherwise referred to as revenge porn. The updates follow recent criticism of Twitter’s handling of such incidents. This summer, for instance, celebrity Rob Kardashian posted naked photos of his ex-girlfriend, which remained on Twitter for half an hour before being taken down.

Past rules on revenge porn prohibited people from posting “intimate photos or videos that were taken or distributed without the subject’s consent.” The new rules, on the other hand, provide more clarity on what Twitter deems as inappropriate and how it plans to sanction violators. For one, the new rules now prohibit publishing compromising images of others taken by hidden cameras or other secretive methods. Images and videos captured privately and not intended for publishing are also prohibited.

Still, Twitter says “some forms of consensual nudity and adult content” are allowed.

Accounts that violate the new policies on revenge porn will be suspended once Twitter identifies that the content was distributed without the subject’s permission. Retweeters of such content will be asked to remove the offending tweets and warned that continued violations will lead to their accounts being suspended, too.

On paper, the new policies seem straightforward enough, but it’s difficult to gauge whether they’ll be effective. Just consider this: Twitter also doesn’t allow hate speech and harassment to plague the site — how’s that going? It’s not enough to put up a sign that says “no revenge porn allowed.” Twitter has to close the doors permanently on those who violate its rules. Until then, the site will continue to be a toxic wasteland of hate and harassment.



Law to help revenge porn victims introduced in the Canadian province of Saskatchewan

(Photo: The Saskatchewan Legislative Building. Brandon Harder / Regina Leader-Post)

The provincial government has introduced legislation that is intended to support victims of “revenge porn.”

The initiative, introduced today by Justice Minister and Attorney General Don Morgan, will create new legal options for people whose intimate images have been shared without their consent. An intimate image is a visual image, including photos or videos, in which a person is nude, partially nude, or engaged in explicit sexual activity, that was made in circumstances that implied a reasonable expectation of privacy.

“This Bill sends a strong message that this callous, criminal behaviour has consequences, and that the Government of Saskatchewan stands with the victims of this type of attack,” Morgan said in a news release.

The Privacy Amendment Act, 2017 will allow a person whose intimate image has been distributed without their consent to sue the person who distributed the image. It will also shift the onus of proof to the person who circulated the image, requiring them to show that they had a reasonable basis to conclude consent had been granted to do so.

Additionally, the amendments will remove the requirement that a lawsuit under The Privacy Act proceed only in the Court of Queen’s Bench. Plaintiffs will have the option to proceed with an action in either small claims or the Court of Queen’s Bench. This will permit plaintiffs in these cases to choose the less expensive and quicker small claims process, where they are claiming damages less than $30,000.

The government believes the amendments complement and support amendments made to the Criminal Code in 2015 to address the distribution of intimate images without consent, and ensure that victims have equal opportunities for redress in both the criminal and the civil spheres of the justice system.

The legislation was expected as the province included a plan to enact it as part of its Throne Speech last week.
