To be clear, Google will still be storing copies of the pages they crawl. They just won’t be making those copies available to end users.
Microsoft tried to shanghai me into the “new Outlook”. When I realized the scope of what they were trying to do under the guise of a simple software update, I was floored. I don’t think even Google, with all of its Borg-ish tendencies, would attempt such a blatant hijacking of user data. The privacy implications are profound.
This situation seems analogous to when air travel started to take off (pun intended) and existing legal notions of property rights had to be adjusted. IIRC, a farmer sued an airline for trespassing because its planes were flying over his land. The court ruled against the farmer because ruling otherwise would have killed the airline industry.
While this is amazing and all, it’s always seemed to me that this approach of using hundreds of laser beams focused on a single point would never scale to be viable for power generation. Can any experts here confirm?
I’ve always assumed this approach was just useful as a research platform – to learn things applicable to other approaches, such as tokamaks, or to weapons applications.
It amazes me that there are so many people who buy a printer, are offered this “pay $X a month for Y pages” type of plan, and say yes. I mean, sure, HP sucks, but they wouldn’t be able to get away with such slimy business practices if there weren’t so many people willing to pay.
I’m with you. Also, it seems like it would be much more efficient to do carbon capture at the source, where the fuel is being used – like a power plant, where flue-gas CO2 concentrations are hundreds of times higher than the roughly 0.04% found in the atmosphere – compared to atmospheric capture.
I wish Apple wouldn’t restrict them as much as they do.
I’m seeing a lot of commenters shitting on Texas here, and while it’s not completely undeserved, I’d like to point out that Texas is 1st in the nation in wind power generation. Texas will implement things – even “Blue” things – if the economics make sense.
I feel like this is the ad-equivalent of the sub-prime mortgage situation, pre-crisis. With mortgages, you had loans that no individual bank or bank manager would want, and then you had an automated process that obfuscated the individual loan details and produced financial products that could be sold as high quality. In the ad world, it’s the same thing. You have these websites that nobody would buy ads from, individually, but somehow, through an automatic process offered by Google and friends, the worthless product becomes valuable.
The DEA says that “manufacturers only sold approximately 70 percent of their allotted quota”, but we don’t get our medications from “manufacturers”; we get our medication from pharmacies, which often carry only one generic version of each medication in addition to the brand-name version. What percentage of generic manufacturers have hit their quota? My guess is that most of the slack is held by manufacturers of brand-name medication, while most of the limits are hitting manufacturers of generics.
Also, the “70 percent” stat is for “amphetamine products”, but what is that referring to, exactly? Adderall? Vyvanse, which is an amphetamine prodrug? All stimulants? (It wouldn’t surprise me, coming from the agency that likes to refer to all illegal drugs as “narcotics”.)
This narrative that the FDA and DEA are pushing – that manufacturers are somehow deciding not to make and sell medication for which there is obvious demand – does not pass the smell test. Maybe the DEA quotas aren’t to blame, but the notion that drug companies are declining, for some unknown reason that nobody can identify, to make and sell medication – the one thing we’ve historically been able to count on them to do – is ludicrous.
That’s a really great idea. Makes a lot more sense than relying on official accounts on 3rd party platforms like Twitter, Reddit, and Facebook.
If there were an easy answer, someone would have implemented it already. Obviously, it’s a challenging problem, and I don’t claim to have the solution.
I think expanding the voting dimensions (a la Slashdot) would make it easier to create an algorithm, but it pushes complexity to the user, so that’s a tradeoff.
But, even with up/down votes, I think there are potential ways of identifying users whose votes deserve more weight. For instance, someone who up-votes both sides of an argument chain (because both sides are making good-faith responses and adding to the conversation) should be boosted.
We need the karma-equivalent of PageRank. Every vote should not be treated the same, just as Google doesn’t weight every link equally. The “one user one vote” system is the equivalent of pre-Google search engines that would rank pages by how many times they contained the search term. But it can’t be as simple as “votes from higher-karma users are worth more” because the easiest way to build insane karma is to build a bot or spam low-effort replies to every rising post. Still, the system needs to be able to extract the wisdom of the crowd from the stupidity of the crowd, and the only way to do that is to apply a weighting gradient to users and their votes.
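The PageRank analogy can be sketched directly. Here is a minimal toy version, assuming an input that maps each voter to the set of comment authors they upvoted; the function name, damping factor, and iteration count are all illustrative choices, not any real site’s algorithm:

```python
# Toy PageRank-style weighting of user votes. Everything here is an
# assumption for illustration -- a sketch of the idea, not an actual
# ranking system.

def vote_weights(upvotes, damping=0.85, iterations=50):
    """upvotes maps each voter to the set of authors they upvoted.

    A user's weight is derived from the weights of the users who
    upvote them, iterated to a fixed point -- the same recursion
    PageRank applies to pages and their inbound links.
    """
    users = set(upvotes)
    for authors in upvotes.values():
        users |= authors
    n = len(users)
    weight = {u: 1.0 / n for u in users}
    for _ in range(iterations):
        # Every user keeps a small baseline (the "teleport" term),
        # then each voter distributes damped weight to the authors
        # they endorsed.
        new = {u: (1.0 - damping) / n for u in users}
        for voter, authors in upvotes.items():
            if authors:
                share = damping * weight[voter] / len(authors)
                for author in authors:
                    new[author] += share
        weight = new
    return weight

# Genuine users endorsed by others outrank an isolated bot clique:
# the bots receive weight only from each other, never from the rest
# of the graph.
votes = {
    "alice": {"bob", "carol"},
    "bob": {"carol"},
    "carol": {"bob"},
    "bot1": {"bot2"},
    "bot2": {"bot1"},
}
weights = vote_weights(votes)
```

In this toy graph, the bot pair ends up with less weight than the users whom outsiders endorse, even though the bots vote for each other constantly – which is exactly the property “one user one vote” lacks.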
Exactly! Back in the day, you had two options: (1) subscribe or (2) buy a single magazine or newspaper. Now, there’s no equivalent to the newsstand for digital media.