Nexgate 2013 State of Social Media Spam Research Report


Table of Contents

Executive Summary
Research Methodology
Key Findings
Introduction
Types of Social Spam
  Link Spam
  Text Spam
Case Study: Spam in Action
  Leading Entertainment Brand
  Major Sports League
Social Spam Communication Mechanisms
  Spammy Apps
  Like-Jacking
  Social Bots
  Fake Accounts
Social Spam Trends
  Chart: Growth Percentage of Spam vs. Comments Across All Social Accounts
Conclusion
About the Author
References


Executive Summary

Spam has been around since the beginning of electronic communication. Spammers have adapted to the technology of the time - whether the telephone, email, or social media - to reach as many users as possible and to line their pockets.

Today, social media spam (or "social spam") is on the rise. During the first half of 2013, spam on a typical social media account grew 355%. Spammers are turning to the fastest-growing communications medium to circumvent the traditional security infrastructures used to detect email spam.

The impact of social media spam is already significant - it can damage brand appearance and turn fans and followers into foes. To make matters worse, a spammy social message isn't seen by just one recipient, but potentially by all of the brand's followers and all of the recipients' friends. Social spam turns one of the greatest assets of social media marketing - its multi-dimensional nature - against the brand.

As social media spam has increased, so too have the types and mechanisms of its distribution across Facebook, YouTube, Google+, Twitter, and other social networks. Link- and text-based spam have evolved to adapt to the social medium. Link spam takes the form of a bare URL with no surrounding text, prompting a curious and unsuspecting user to click through to the spammer's website. Text spam includes phishing attacks that ask for personal information or money, and "chain letters," which may make a threat or a sympathetic plea prompting the user to circulate the spam.

Social media has also led to new methods of delivering spam, such as spammy apps, so-called "Like-Jacking," social bots, and fake accounts. Spammy apps offer to perform special tasks outside of a social network's original features. With Like-Jacking, instead of clicking on malicious links, victims are tricked into clicking on images that appear to be "Like" or other seemingly harmless buttons. Social bots and fake accounts are used to infiltrate the victim's social media world. Together, these new attack methods can significantly detract from a brand's social media presence and its social marketing ROI.

Nexgate's research team has investigated these and other trends in social media security, and has uncovered some interesting statistics on the fast-growing social media spam phenomenon. Our findings show, for example, that only 15% of all social spam contains a URL that security systems can detect as spammy, and that at least 5% of all social media apps are spammy. We explore these results and more in this first annual 2013 State of Social Spam report, written by our data scientist research team.

Social media spam has risen 355% in the first half of 2013.


Research Methodology

This study is based on social data collected from social media networks observed by Nexgate, referred to throughout this paper as the "Nexgate corpus," which was collected between 2011 and 2013. The social media networks under study include Facebook, Twitter, Google+, YouTube, and LinkedIn. The Nexgate corpus contains over 60 million pieces of unique content written by over 25 million social accounts, including the top five most prolific and trafficked social media accounts for each social media network as determined by Socialbakers (8). Importantly, the observed social data is the fraction of content that was publicly available on the aforementioned social accounts. This means that, despite the significant increase in spam found, the data in this report excludes any risky content or spam that was manually hidden or removed by the owners of the accounts researched, and therefore understates the total. The social data includes all text communication from each of the social media networks, such as wall posts and comments from Facebook, or tweets and retweets from Twitter. We restrict our study to public information available from the social media networks' APIs.
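Because the study restricts itself to public content pulled from each network's API, a collection loop along the lines of the sketch below would suffice. This is an illustrative sketch only: the endpoint, field names, and pagination parameters are placeholders, not any network's real API or Nexgate's actual collection pipeline.

```python
import requests

# Hypothetical endpoint and field names; each real network (Facebook, Twitter,
# Google+, YouTube, LinkedIn) exposes its own public API with different schemas.
API_URL = "https://api.example-social-network.com/v1/public_posts"

def collect_public_posts(account_id, api_key, max_pages=10):
    """Page through the publicly visible posts and comments of one account."""
    corpus, cursor = [], None
    for _ in range(max_pages):
        params = {"account_id": account_id, "api_key": api_key}
        if cursor:
            params["cursor"] = cursor
        resp = requests.get(API_URL, params=params, timeout=30)
        resp.raise_for_status()
        payload = resp.json()
        for post in payload.get("posts", []):
            # Keep only what the study needs: who wrote the content and its text.
            corpus.append({"author": post["author_id"], "text": post["text"]})
        cursor = payload.get("next_cursor")
        if not cursor:  # no more public content for this account
            break
    return corpus
```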

Key Findings

• During the first half of 2013, there has been a 355% growth of social spam.
• 5% of all social media apps are spammy.
• 20% of all spammy apps are found on a brand-owned social media account.
• Fake social media profiles post greater volumes of content, and more quickly, than real profiles.
• Spammers often spam at least 23 different social media accounts.
• For every 7 new social media accounts, 5 new spammers are detected.
• Facebook and YouTube provide the most spam content compared to other social media networks. The ratio of spam on Facebook or YouTube to the other social networks is 100 to 1.
• More spammers are found on Facebook and YouTube than on any other social network.
• 15% of all social spam contains a URL, often to spammy content, pornography, or malware.


 

• Facebook contains the highest number of phishing attacks and the most personally identifiable information - more than 4 times that of the other social media networks.
• YouTube contains the most risky content, that is, content containing profanity, threats, hate speech, and insults. For every 1 piece of risky content found on other social media networks, there are 5 pieces of risky content on YouTube.
• The rate of spam is growing faster than the rate of comments on branded social media accounts.
• 1 in 200 social media messages contains spam, including lures to adult content and malware.

Introduction

Even the telegraph in the late 19th century did not escape spammers (2). Spam was popularized in the late 1990s and early 2000s through email messages, such as the infamous "Viagra spam emails."

These days, just about every email client comes equipped with a decent filter that stops most spam before the end user ever sees it. Corporate spam gateways aggregate traffic at the network perimeter and root out most email spam before it even reaches the client. There are now well-developed infrastructures for detecting email spam, and very little of it gets through.

To find better payoffs, spammers have turned to other electronic mediums. One such vulnerable medium is the social network, such as Facebook, where social network spam, or "social spam," is more difficult to detect. Social spam is more potent than email spam because spammers can reach targeted audiences more easily using social-network search tools. For instance, the new "Facebook Graph Search" allows a user to precisely query a specific target audience. A spammer can include parameters such as age, location, likes, interests, what brands a user follows, connections, and more, to narrow down his or her target victims. Additionally, instead of being seen only by the recipient, as with email spam, a social spam message may be seen by the recipient and all of the recipient's social network followers. Furthermore, if the recipient's content is public, social spam can reach an even wider audience; in fact, up to 40% of social media accounts have been used to magnify and broaden spam distribution (5).

Perhaps the greatest motivation for spamming is financial gain. An easy way to this end is to attract traffic to sites that contain advertisements, or ads.

Facebook hosts 4 times more phishing attacks and PII than the other social networks.


A spammer could be paid each time an advertisement is clicked, so an efficient method of sending large volumes of traffic to a spammer's ad site can generate a lot of money. This method of revenue generation is "easy" for spammers because users expend little energy when clicking an advertisement. What's more, most spam victims don't realize exactly what they're clicking on, since the lure can be anything "social" - a picture of a child, or something fluffy and cute, like a cat. This highlights another reason why brands need to be relentless in removing spam from their accounts: it represents a triple threat to marketing ROI if a page's audience clicks on the spammer's ad instead of the brand's ad. The brand loses its focused advertising opportunity, the spammer gets a chance to improve their website rank at the brand's expense, and the brand erodes trust with its audience by letting them be victimized.

Other traditional spamming methods involve "phishing attacks," which obtain the victim's passwords or credit card details, or injecting "malware," software installed on a user's computer to gather sensitive information. These methods are less popular (but still frequently seen) since they require more effort from both the spammer and the user. However, when successful, they open opportunities to extract greater financial rewards from the victim.

Social spam makes use of all of these traditional spamming methods seen in email spam, but given the possibilities of the social network medium, the set of mechanisms for spreading spam is immensely expanded. Social spam, for example, gets distributed to hundreds, thousands, and even millions of people with one post. Email spam, by comparison, is one-to-one, requiring significantly more effort and facing much higher barriers. While new, social spam marks the next phase of attack engineering by the bad guys. In this paper, we explore social spam in detail.

Types of Social Spam

There are numerous types of social spam spread across Facebook, YouTube, Google+, Twitter, and the other social networks. The two most frequent are link spam and text spam, described in detail below.

Link Spam

Link spam is often just a single link with no surrounding text. A curious and unsuspecting user may click the link and be sent to a spammer's website. That website may serve ads, which generate revenue for the spammer, or install malware, but the typical benefit of link spam is spamdexing.

Spamdexing is a deceptive technique that increases the spammer's website rank in search results. To entice the user, a short phrase may accompany the link, promising easy money, pills, porn, and so on. Otherwise, to remain mysterious, the link can be left very vague. Here are some examples, from the Nexgate corpus, of text accompanying link spam:

[Screenshot examples of link spam from the Nexgate corpus]
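Because link spam so often consists of little more than a bare URL, even a very simple filter can flag suspicious posts for review. The sketch below is a minimal illustration of that idea, not Nexgate's classifier: it treats a post as link spam if it is essentially a bare URL or a URL with only a short teaser phrase, resolves redirects (for example from shortening services), and checks the final domain against a small, hypothetical blocklist.

```python
import re
import requests

URL_RE = re.compile(r"https?://\S+")

# Stand-in blocklist for illustration; a real system would use curated
# threat-intelligence feeds, not a hard-coded set.
BLOCKED_DOMAINS = {"spam-pills.example", "free-gift-cards.example"}

def resolve_final_url(url):
    """Follow redirects (including URL shorteners) to the destination URL."""
    try:
        resp = requests.head(url, allow_redirects=True, timeout=10)
        return resp.url
    except requests.RequestException:
        return url  # if the lookup fails, fall back to the raw URL

def looks_like_link_spam(post_text):
    """Flag posts that are essentially a bare URL or point at a blocked domain."""
    urls = URL_RE.findall(post_text)
    if not urls:
        return False
    leftover = URL_RE.sub("", post_text).strip()
    if len(leftover) < 20:  # a bare link, or a link with only a short teaser phrase
        return True
    final = resolve_final_url(urls[0])
    domain = final.split("/")[2] if "://" in final else final
    return domain.lower() in BLOCKED_DOMAINS
```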

                 

 

Another method of remaining mysterious or vague is to shorten the link altogether without revealing where the link is pointing.

As more people share legitimate content through URL-shortening services, such as bitly (bitly.com) and TinyURL (tinyurl.com), identifying spammy links becomes even more challenging. These links can also automatically send similarly spammy links to all of a user's Twitter contacts.

As described above, email infrastructure is typically advanced enough to filter many of these messages, including by blacklisting URLs or filtering text. For social media, however, few technologies exist to identify, classify, and remove spammy content and URLs accurately. Many organizations today rely unnecessarily on manual, human review of every post and comment (which is extremely costly, time-consuming, and error-prone), or simply have no defense and leave their followers to be victimized.

Text Spam

When given the chance to manifest their spam through engaging text, spammers' content can become outright captivating. One such example is the "chain letter." This type of spam threatens the recipient: distribute the message to as many people as possible or something horrible will happen. In some cases, the message may even be positive (e.g., "$1 is given to cancer research for every share or like"). These chain letters can also contain a request to send money to the original sender. An example of a chain-letter spam, found in the Nexgate corpus, is given here:

[Screenshot of a chain-letter spam post from the Nexgate corpus]

Other types of text spam ask the recipient to respond to the spammer via a private message in order to obtain "more information." These are typically "work-from-home" schemes that promise easy money. The spammer typically extracts money from the victim by charging a fee to join the program, or by selling overvalued products. The text may have an accompanying picture designed to further attract the victim's attention. Examples of these messages, observed in the Nexgate corpus, are included here:

[Screenshot examples of work-from-home spam messages from the Nexgate corpus]

 


The author of the following "work-from-home" spam knows that his message is too good to be true, and he understands that you might be doubtful about his claims. By deceptively admitting that most "work-from-home" schemes are scams, the spammer aims to earn your trust by giving you advice on how to avoid other "work-from-home" schemes. However, the message itself is nothing but another "work-from-home" scheme.

[Screenshot of the work-from-home spam message described above]


Text spam may also be used for phishing attacks, where the recipient is asked to verify their account using their credentials. These phishing attacks allow the perpetrator to gather identifying information from the victim, which may then be used to gain access to other accounts, such as bank accounts. A few examples of these seemingly legitimate but exploitative attacks are shown here (9):

[Screenshot examples of phishing posts]


Because this type of spam lives entirely within social networks, traditional spam technologies have no interception point and no way to detect or deal with it.

Regardless of the spam type, much of it is distributed on popular social media pages and embedded deep within the comments of a particular post. Spammers hide their content here so it's not easily noticed by the brand and community managers who patrol their pages, while leveraging the broad reach and following of big brands so they can target the greatest number of people possible. What's more, by tailoring their message, spammers can engage the interests of the brand's followers - in a particular show, product, or celebrity, for example - thus increasing the spammer's click rate.
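Because this kind of spam lives in post comments rather than email, any filtering has to happen on the post text itself. The sketch below is a minimal keyword-based flagger for the text-spam patterns described above (chain letters, work-from-home offers, credential requests). It is an illustration only, not Nexgate's classification engine, and the phrase list and sample comments are made up.

```python
import re

# Illustrative phrase list drawn from the spam patterns described in this
# section; a production system would use statistical models, not a lookup table.
SUSPICIOUS_PHRASES = [
    r"work\s+from\s+home",
    r"share\s+this\s+or",           # chain-letter style threats
    r"\$\d+\s+(a|per)\s+(day|hour)",
    r"verify\s+your\s+account",     # credential phishing
    r"free\s+gift\s+card",
]
PATTERNS = [re.compile(p, re.IGNORECASE) for p in SUSPICIOUS_PHRASES]

def flag_text_spam(comment, min_hits=1):
    """Return True if the comment matches enough suspicious phrases."""
    hits = sum(1 for p in PATTERNS if p.search(comment))
    return hits >= min_hits

comments = [
    "Loved last night's episode!",
    "I make $300 a day, work from home, message me for more information",
    "Verify your account now or it will be closed today",
]
for c in comments:
    print(flag_text_spam(c), "-", c)
```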

Spam In Action

To provide an example of spam in action, we've detailed the spam facing two well-known brands, described below. We have kept the brands anonymous.

Entertainment Pioneer Leading the Way In Spam Too

The first example is a leading media and entertainment firm. This company has built one of the largest online social communities across Facebook, YouTube, and Twitter. The brand maintains hundreds of social media accounts, with roughly 50 million "Likes" on its busiest Facebook Page and 240 thousand unique posts per week.

Given the popularity of this brand, this Facebook Page contains a large volume of spam: 1 in 7 comments contains spam content. About 3% of the spam found on this Page contains a spam link, and about 1.5% contains malware. The most frequent type of spam is the "work-from-home" scheme, which is distributed through many types of spammy applications. These applications range from simple publishing applications used on the desktop to applications found on smartphones. Other apps used to spam are created specifically for that purpose; these types of apps, and examples of them, are discussed in the next section (Social Spam Communication Mechanisms).

As discussed, few spam-fighting technologies are developed and available today. Since the social media networks have no native workflow or policy enforcement for detecting spam, most accounts are at risk of spammers' attacks. As more spam content appears, the potential for the brand and its message to be diluted increases, and trust is eroded with followers and fans. Because the brand is not protecting against spammers or fake accounts, it is also wasting financial resources on advertising campaigns and promotional material, since spammers and fake accounts provide meaningless Likes and comments.

1 in 7 social posts contains spam.

[Chart: Growth Rate of Spam vs. Comments, Entertainment Pioneer (series: Comments, Spam)]

As seen from the graph above, which is plotted over a two-month period, the growth rate of social spam for this social account is increasing faster than the growth rate of comments. More specifically, while the rate of comments is growing linearly, the rate of social spam is growing exponentially. During April 2013, the number of posts and comments on the brand's social account grew about 20%, with an increase in spam of 5%. During May 2013, content grew by approximately 68%, while spam grew by around 60%. So even though the brand was taking appropriate action to increase social media activity and brand awareness, it was not able to control the social media spam seen on its account. Not only did the volume of social spam increase, its rate of growth outpaced that of posts and comments, which added to the dilution of brand reputation.
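As a quick illustration of how month-over-month growth figures like those above are computed, the sketch below derives growth percentages from raw monthly counts. The counts themselves are hypothetical (not the case-study brand's actual numbers), chosen only so the computed percentages line up with the figures quoted in the paragraph above.

```python
def growth_pct(previous, current):
    """Month-over-month growth, as a percentage of the previous month."""
    return 100.0 * (current - previous) / previous

# Hypothetical monthly totals for one account (not the brand's real data).
comments = {"March": 100_000, "April": 120_000, "May": 201_600}
spam     = {"March":   2_000, "April":   2_100, "May":   3_360}

for label, series in (("comments", comments), ("spam", spam)):
    apr = growth_pct(series["March"], series["April"])
    may = growth_pct(series["April"], series["May"])
    print(f"{label}: April {apr:.0f}%, May {may:.0f}%")
    # comments: April 20%, May 68%   |   spam: April 5%, May 60%
```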

Sports League Loses Out on Spam

In another example of a social media account with a large volume of social spam, we turn to the social media account of a leading sports league. This brand has built a social community with roughly 18 million subscribers and generates about 500,000 unique posts or other pieces of social media activity per week. About 1 in 4 posts is spam, and 1 in 11 comments contains hate speech. Because this brand has more spam, we can see its impact clearly as it erodes trust among its users. In fact, this same brand, which hosts so much hate speech on its pages - nearly double that of the entertainment pioneer above - spends significant resources condemning this same language via its public relations team.

Over a two-month period for this sports league's social media account, spam grew roughly 35% while quality content also grew 35%. As the numbers show, the more the brand owning this social media account grows in activity, the more abuse it unleashes on its audience, the greater its opportunity cost, and the lower its marketing ROI.

Social Spam Communication Mechanisms

Spammy Apps

A new breed of spam mechanism exists on social media networks, one that takes the form of a downloadable application, or app. These apps offer to perform special tasks that a typical social media platform cannot, such as showing how many times a profile has been viewed by other users or changing the color theme of a user's social media account. As with other spam types, the app may promise easy money. Once these apps are installed, malicious software or phishing attacks can proceed to exploit the victim. The names of a few nuisance apps are given below:

• Timeline Stalkers
• Profile Peekers
• Change Your Color
• FREE Gift Cards

Typical content accompanying these apps includes:

[Screenshot examples of posts promoting spammy apps]


Using technology, you can detect social spam apps. Nexgate has found, for example, that at least 5% of all apps are spammy, and that 20% of all spammy apps are found on a brand-owned social media account.

Like-Jacking

With Like-Jacking, instead of clicking on links, victims are tricked into clicking on images that appear to be "Like" or other typically harmless buttons. The victim may either be taken to a website hosted by a spammer, as described in the previous section, or the "liked" content may appear at the top of their news feed, unbeknownst to the victim, since this activity is not generally advertised back to the user.


A similar method is to use pictures that entice the victim to click through, leading to the same effects discussed above.

Another method that separates social spam from other forms of spam is the use of profile pictures that entice users to click on them, with links to sites that can either install malicious content or generate more click-jacking. Here is an example of comments from YouTube that attempt to attract users to click through:

[Screenshot of YouTube comments designed to draw clicks on the commenters' profiles]

Clicking on any of these profiles leads to a page similar to:

[Screenshot of the spammer's profile page]

 

The user is then tempted to click the link on the profile picture, which leads the victim into the spammer's trap.

Social Bots

Social bots are prevalent among the social media networks. Using computer scripts, programmers can quickly create profiles that have more "influence" than Oprah Winfrey (6). Social bots can automatically respond to certain posts. To demonstrate the existence of social bots, we look at a recent case that shows the automation of replying via a social media account.

The following exchange was observed on Bank of America's Twitter account in early July 2013. As a user tweeted that he was being chased by police near Bank of America HQ, the Bank of America Twitter account detected this and "retweeted":

[Screenshot of the automated reply from the Bank of America Twitter account]

Although this particular message is benign and intends to be helpful, one could imagine the efficiency of distributing spam messages through a social bot. When a social bot is turned on, it can automatically reply with any of the spam content discussed in the previous sections. Furthermore, these social bots can be designed to automatically request to become "friends" or "followers" when they discover a new social media user, or they can be used to connect to brand accounts.
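As an illustration of how little machinery such a bot needs, here is a hypothetical keyword-triggered auto-reply loop. The client class, its methods, and the trigger words are placeholders invented for this sketch; nothing here corresponds to a real network SDK or to any bot observed in the Nexgate corpus.

```python
import time

class FakeSocialClient:
    """Placeholder for whatever streaming/posting API a real bot would use."""
    def new_mentions(self):
        return []  # a real client would poll or stream the network here

    def reply(self, post_id, text):
        print(f"replying to {post_id}: {text}")

TRIGGER_WORDS = {"bank", "account", "help"}
CANNED_REPLY = "We'd like to help! Please follow us and send a direct message."

def run_bot(client, poll_seconds=60, max_cycles=3):
    """Reply automatically to any observed post containing a trigger word."""
    for _ in range(max_cycles):
        for post in client.new_mentions():
            words = set(post["text"].lower().split())
            if words & TRIGGER_WORDS:
                client.reply(post["id"], CANNED_REPLY)
        time.sleep(poll_seconds)

run_bot(FakeSocialClient(), poll_seconds=1, max_cycles=1)
```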

Fake Accounts

Fake accounts are social media accounts created to resemble a "real" account. On the surface, the account may post benign content and photos, and may have friends or followers that post similarly benign content and photos to the account. This makes spam originating from fake accounts harder for the recipient to discern. If a message such as the following were to originate from a fake account designed to seem like a real person, a social media user might be more inclined to believe it:

[Screenshot of a spam message posted by a convincing fake account]

Many fake accounts are sold on the underground social media market. Such services can be bought for a small fee and are easily discovered through search engines. Some of these accounts may be real accounts that have been compromised, but they are ultimately used for the same purposes.

Using Nexgate's analytics tools, we were able to determine some common traits that fake accounts share. We collected a random sampling of 200 fake accounts (profiles) and 200 real accounts. Looking through the 3 months before mid-August 2013, we were able to determine that activity from fake accounts is quite different from the activity seen in real accounts.

While the fake profiles collected posted high volumes of content over a period of several days, real profiles tended to post content evenly per day over the entire 3-month range.

[Chart: Fake Profile Content - posting volume concentrated in bursts over a few days]

[Chart: Real Profile Content - posting volume spread evenly across the 3-month range]

We also observed posting behavior that tended to vary greatly between fake accounts and real accounts. In one example (see below), the fake account posted the same content at the same time on its own account and on others' accounts.

[Screenshot: identical posts made simultaneously by a fake account on multiple accounts]
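One simple way to quantify the difference the charts illustrate is to measure how concentrated an account's posting activity is in time. The sketch below is an illustrative burstiness measure over daily post counts, not Nexgate's actual analytics, and the sample counts are made up.

```python
from statistics import mean, pstdev

def burstiness(daily_post_counts):
    """Coefficient of variation of daily post counts: accounts that dump
    content in a few days score high; accounts that post evenly score low."""
    avg = mean(daily_post_counts)
    if avg == 0:
        return 0.0
    return pstdev(daily_post_counts) / avg

# Hypothetical 10-day samples: a bursty (fake-looking) profile vs. a steady one.
fake_profile = [0, 0, 0, 42, 57, 49, 0, 0, 0, 0]
real_profile = [3, 4, 2, 5, 3, 4, 3, 2, 4, 3]

print("fake:", round(burstiness(fake_profile), 2))  # high value
print("real:", round(burstiness(real_profile), 2))  # low value
```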


Social Spam Trends

Other studies claim that social spammers may never get caught (3, 4). This may seem entirely plausible: even the largest social networks have few security resources. Twitter, for example, has hired only 2 "spam science programmers" (7) out of its staff of 750 to fight spam. We believe, however, that social spam and spammers can be caught, especially since we have been able to accurately identify them with Nexgate's technology.

Facebook's "EdgeRank algorithm" (1) assigns each post a score based on the number of Facebook "Likes," "comments," or "shares" by others. In other words, the more people care about a user's posts, the higher that user's total EdgeRank score (a simplified version of such a score is sketched after the observations below). On the surface this might look like the makings of a spam detector, since posts by spammers may not often be "shared" or "Liked" - educated users may realize the spammer's intent after following the links in the spam, or notice something awry in the post. But just because a post isn't "Liked" or "shared" doesn't mean it's spam. Another shortcoming of this approach is that spammers may join their own networks of spammers, as discussed above, that continuously "Like" and "share" each other's comments and thus outsmart the EdgeRank algorithm.

Because of Nexgate's proprietary and patent-pending ability to detect spam not only across social accounts within the same social media platform, but also across different social media platforms, many interesting observations about social spam can be made. For instance, we observed spammers who targeted 23 different social media accounts simultaneously. Additionally, for every 7 new social media accounts observed, 5 new spammers are detected. Some other observations include:

• Facebook and YouTube provide the most spam content compared to other social media networks. For every 1 spam comment found on other social media networks, there are 100 spam comments on Facebook or YouTube.
• As expected from the previous result, more spammers are found on Facebook and YouTube than on any other social network.
• Facebook contains the highest number of phishing attacks and the most personally identifiable information, by a factor of 4 compared to other social media networks.
• YouTube contains the most risky content, that is, content containing profanity, threats, hate speech, and insults. For every 1 piece of risky content found on other social media networks, there are 5 pieces of risky content on YouTube.
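As a concrete illustration of the engagement-based scoring described above, here is a toy EdgeRank-like calculation. It is not Facebook's actual algorithm (whose real formula is not public in detail and also weighs factors such as affinity and time decay); the weights and sample counts are made-up values for illustration only.

```python
# Toy engagement score in the spirit of the EdgeRank description above:
# a post earns points for Likes, comments, and shares.
WEIGHTS = {"likes": 1.0, "comments": 2.0, "shares": 3.0}

def engagement_score(post):
    return sum(WEIGHTS[k] * post.get(k, 0) for k in WEIGHTS)

def account_score(posts):
    """Total score across an account's posts: the more people care, the higher."""
    return sum(engagement_score(p) for p in posts)

# Hypothetical posts: a spammer's posts typically attract little engagement...
spammer_posts = [{"likes": 0, "comments": 1, "shares": 0},
                 {"likes": 2, "comments": 0, "shares": 0}]
# ...unless a ring of fake accounts Likes and shares them to game the score.
boosted_posts = [{"likes": 40, "comments": 5, "shares": 12} for _ in range(2)]

print(account_score(spammer_posts))  # low
print(account_score(boosted_posts))  # artificially high
```

This also makes the report's two caveats concrete: a low score does not prove a post is spam, and a coordinated ring of fake accounts can inflate the score at will.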

Instead of looking at the rate of growth of spam vs. comments for a particular social media account, we now turn our attention to the rate of growth of spam vs. comments for all social media accounts in the Nexgate corpus.

[Chart: Growth Percentage of Spam vs. Comments Across All Social Accounts (series: Comments, Spam)]

As we saw in our case study, the rate of spam is growing faster than the rate of comments. A slight kink toward the bottom left of the red curve suggests that, perhaps for a brief time, social spam was growing more slowly than comments in general. However, whether through a new strategy or by outsmarting the social networks' standard detection mechanisms, social spam has since grown significantly faster than comments.

Conclusion

Spam has been with us for a long time - through the evolution of email, the telephone, and now our social media. It's no surprise that the bad guys are targeting today's most population-dense communication medium; until now, however, few have truly investigated the methods of these new-age spammers, or developed technology to adequately address the problem on behalf of the social networks and the brands and fans that enjoy them.

The same expertise and research used for this study also powers the detection and enforcement engines of the Nexgate product suite. Nexgate is the leading provider of social media security and compliance, with automated detection, classification, and removal of spam, malicious, and inappropriate content across all major social media platforms. Our patent-pending technology seamlessly connects to social networks to remove unauthorized content and protect your brand and followers.

To learn more about how Nexgate can help your brand automate social media security and tackle the problem of social media spam, visit nexgate.com.

 

About Nexgate

Nexgate provides cloud-based brand protection and compliance for enterprise social media accounts. Its patent-pending technology seamlessly integrates with the leading social media platforms and applications to find and audit brand-affiliated accounts, control connected applications, detect and remediate compliance risks, archive communications, and detect fraud and account hacking.

Nexgate is based in San Francisco, California, and is used by some of the world's largest financial services, pharmaceutical, Internet security, manufacturing, media, and retail organizations to discover, audit, and protect their social infrastructure.

About the Author

Harold Nguyen is a data scientist at Nexgate and has years of experience fighting spam. His areas of expertise include Machine Learning, Statistical Analysis, and Algorithms Research. Harold holds a Ph.D. in physics from U.C. Riverside and a B.A. from Berkeley, and conducted research with the Compact Muon Solenoid experiment at the Large Hadron Collider in Geneva, Switzerland. He is passionate about social media, security, and Big Data.

References

(1) "EdgeRank: The Secret Sauce That Makes Facebook's News Feed Tick." TechCrunch, 2010-04-22. Retrieved 2012-12-08.
(2) "Getting the message, at last." The Economist, 2007-12-14.
(3) http://blog_impermium_com.s3.amazonaws.com/wpcontent/uploads/2011/10/Impermium_Halloween_Small2.jpg
(4) http://www.itworld.com/it-managementstrategy/264648/social-spam-taking-over-internet
(5) http://www.businessweek.com/articles/2012-05-24/likejacking-spammers-hit-social-media
(6) http://www.nytimes.com/2013/08/11/sunday-review/i-flirt-and-tweet-follow-me-at-socialbot.html?emc=eta1&_r=1&
(7) http://online.wsj.com/article/SB10001424052970203686204577112942734977800.html
(8) http://www.socialbakers.com
(9) http://nakedsecurity.sophos.com/2011/02/01/facebook-will-close-all-accounts-today-rogue-app-spreads-virally/

