Trying to bee nice.

  • 3 Posts
  • 18 Comments
Joined 1 year ago
Cake day: June 17th, 2023


  • cura@beehaw.org to World News@beehaw.org · The Good News Effect · 22 points · edited 1 year ago

    While news outlets are certainly drivers of fatigue, readers are not entirely off the hook. Research shows that negative headlines have more than a 60 percent higher click-through rate than positive ones—à la the old trope, “if it bleeds, it leads.”

    I've always felt that there was far more bad news than good news, until now. I made a tally of the posts on Beehaw's homepage just now and registered 14 as positive, 10 as negative, and 15 as neutral with respect to my stance. It seems I just actively focus more on the bad ones. Maybe I'll try reading more of the positive ones.



  • What spammers want, how they do it, and how to prevent it

    What do spammers want? The main motivation for spam is profit. Spam tends to be very lucrative, even when spammers are just peddling questionable products. That said, there are worse ways spammers pursue financial gain.

    One such way is phishing: extracting sensitive personal information, such as passwords or credit card numbers, from the user by pretending to be an important or official source, such as a bank or an IT manager, or by promoting a fake offer to grab the user's attention. With the popularity of social media, there are even phishing techniques focused entirely on creating authentic-looking posts for this exact purpose.

    Another possible motive is to turn your computer into a zombie. In computer science, a zombie is a computer that has been infected by a virus or taken over by a hacker and is now controlled remotely by the attacker, without the user being aware. These infected machines are then used for malicious ends, such as orchestrating distributed denial-of-service (DDoS) attacks or spreading yet more spam via e-mail, ultimately generating more profit in the process.

    There are also spammers who seek to plant links back to their own websites or to misleading offers, in a misguided attempt to raise those sites' search engine rankings. Such link-building schemes are SEO tactics frowned upon by Google, because they try to trick both search engines and users.

    Whatever the case may be, spam ultimately boils down to malicious intent, whether towards you, your site, or your users.





  • Surveys

    After each song, participants were asked to rank how much they liked the song (1 to 10), if they would replay the song (0, 1), recommend the song to their friends (0, 1), if they had heard it previously to assess familiarity (0, 1), and if they found the song offensive (0, 1). We also showed participants lyrics from the song and lyrics created by the researchers and asked them to identify the song lyrics to measure their memory of the song (0, 1).

    I still think your concern is legitimate.


  • Abstract

    Identifying hit songs is notoriously difficult. Traditionally, song elements have been measured from large databases to identify the lyrical aspects of hits. We took a different methodological approach, measuring neurophysiologic responses to a set of songs provided by a streaming music service that identified hits and flops. We compared several statistical approaches to examine the predictive accuracy of each technique. A linear statistical model using two neural measures identified hits with 69% accuracy. Then, we created a synthetic data set and applied ensemble machine learning to capture inherent non-linearities in neural data. This model classified hit songs with 97% accuracy. Applying machine learning to the neural response to the first minute of songs accurately classified hits 82% of the time, showing that the brain rapidly identifies hit music. Our results demonstrate that applying machine learning to neural data can substantially increase classification accuracy for difficult-to-predict market outcomes.

    So they used synthetic data to both train and test their model, because the original dataset contains only 24 songs.
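To see why training and testing on the same synthetic pool can flatter a model, here is a minimal sketch (stdlib Python, invented toy data and a 1-nearest-neighbour stand-in classifier, not the paper's actual features or pipeline). If synthetic examples are jittered copies of the originals and the test split is drawn from the same synthetic pool as the training split, the test set contains near-duplicates of training points:

```python
import random

random.seed(42)

# 24 "songs": one toy neural-like feature, weakly related to hit (1) / flop (0).
original = [(random.gauss(0.6 * label, 1.0), label) for label in [0, 1] * 12]

def jitter(points, copies=20, noise=0.01):
    """Synthetic set: slightly perturbed copies of the originals, labels kept."""
    return [(x + random.gauss(0, noise), y) for x, y in points for _ in range(copies)]

def knn1(train, x):
    """1-nearest-neighbour prediction on the single feature."""
    return min(train, key=lambda p: abs(p[0] - x))[1]

def accuracy(train, test):
    return sum(knn1(train, x) == y for x, y in test) / len(test)

# Leaky protocol: split the synthetic pool itself into train and test halves,
# so test points are near-copies of points the model has already seen.
pool = jitter(original)
random.shuffle(pool)
leaky_acc = accuracy(pool[240:], pool[:240])

# Stricter protocol: hold out whole original songs, synthesize only from the rest.
held_out, kept = original[:8], original[8:]
honest_acc = accuracy(jitter(kept), held_out)

print(f"train/test on shared synthetic pool: {leaky_acc:.2f}")
print(f"test on fully held-out originals:    {honest_acc:.2f}")
```

The leaky split scores near-perfectly almost regardless of how informative the feature is, which is why evaluating on held-out original songs (as the authors also do, quoted below in the thread) is the more telling number.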

    Next, we assessed the bagged ML model’s ability to predict hits from the original 24 song data set. The bagged ML model accurately classified songs with 95.8% accuracy, which is significantly better than the baseline 54% frequency (Success = 23, N = 24, p < 0.001).

    So the 97.2% accuracy is reported on the synthetic data. On the original data, it is 95.8%. But the authors do acknowledge the limitations.

    While the accuracy of the present study was quite high, there are several limitations that should be addressed in future research. First, our sample was relatively small so we are unable to assess if our findings generalize to larger song databases.
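As a sanity check, the significance figure quoted earlier (Success = 23, N = 24 against a 54% baseline) can be reproduced with an exact one-sided binomial test in a few lines of stdlib Python; the numbers below come straight from the quoted passage:

```python
from math import comb

n, successes, baseline = 24, 23, 0.54

# P(X >= 23) under Binomial(n=24, p=0.54): exact one-sided tail probability.
p_value = sum(
    comb(n, k) * baseline**k * (1 - baseline) ** (n - k)
    for k in range(successes, n + 1)
)

print(f"one-sided p-value: {p_value:.2e}")  # roughly 8.1e-06, well below 0.001
```

So the reported p < 0.001 checks out, even though, as the limitations note says, N = 24 is too small to tell whether the result generalizes.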