Technology


A nice place to discuss rumors, happenings, innovations, and challenges in the technology sphere. We also welcome discussions on the intersections of technology and society. If it’s technological news or discussion of technology, it probably belongs here.

Remember the overriding ethos on Beehaw: Be(e) Nice. Each user you encounter here is a person, and should be treated with kindness (even if they’re wrong, or use a Linux distro you don’t like). Personal attacks will not be tolerated.

This community's icon was made by Aaron Schneider, under the CC-BY-NC-SA 4.0 license.


Hey Beeple and visitors to Beehaw: I think we need to have a discussion about !technology@beehaw.org, community culture, and moderation. First, some of the reasons that I think we need to have this conversation.

  1. Technology got big fast and has stayed Beehaw's most active community.
  2. Technology gets more reports (about double in the last month, by a rough hand count) than the next-highest community that I moderate (Politics, and that's during election season, in a month that involved a disastrous debate, an assassination attempt on a candidate, and a major party's presumptive nominee dropping out of the race).
  3. For a long time, I and other mods have felt that Technology at times isn't living up to the Beehaw ethos. More often than I'd like, I see comments in this community where users are being abusive or insulting toward one another, often without any provocation other than the perception that the other user's opinion is wrong.

Because of these reasons, we have decided that we may need to be a little more hands-on with our moderation of Technology. Here’s what that might mean:

  1. Mods will be more actively removing comments that are unkind or abusive, that involve personal attacks, or that just have really bad vibes.
    a. We will always try to be fair, but you may not always agree with our moderation decisions. Please try to respect those decisions anyway. We will generally try to moderate in a way that is a) proportional, and b) gradual.
    b. We are more likely to respond to particularly bad behavior from off-instance users with pre-emptive bans. This is not because off-instance users are worse or less valuable; it's simply that we can't vet users from other instances or interact with them with the same frequency, and other instances may have less strict sign-up policies than Beehaw, which makes it more difficult to play whack-a-mole.
  2. We will need you to report early and often. The drawbacks of getting reports for something that doesn't require our intervention are outweighed by the benefits of being able to get to a situation before it spirals out of control. By all means, if you're not sure whether something rises to the level of violating our rules, say so in the report reason; I'd personally rather get reports early than late, when a thread has already spiraled into an all-out flamewar.
    a. That said, please don't report people simply for being wrong, unless they are wrong in a way that is actually dangerous to others. It would be better for you to kindly disagree with them in a nice comment.
    b. Please, feel free to try and de-escalate arguments and remind one another of the humanity of the people behind the usernames. Remember to Be(e) Nice even when disagreeing with one another. Yes, even Windows users.
  3. We will try to be more proactive in stepping in when arguments are happening and trying to remind folks to Be(e) Nice.
    a. This isn't always possible. Mods are all volunteers with jobs and lives, and things often get out of hand before we are aware of the problem due to the size of the community and mod team.
    b. This isn't always helpful, but we try to make these kinds of gentle reminders our first resort when we get to things early enough. It’s also usually useful in gauging whether someone is a good fit for Beehaw. If someone responds with abuse to a gentle nudge about their behavior, it’s generally a good indication that they either aren’t aware of or don’t care about the type of community we are trying to maintain.

I know our philosophy posts can be long and sometimes a little meandering (personally that's why I love them) but do take the time to read them if you haven't. If you can't/won't or just need a reminder, though, I'll try to distill the parts that I think are most salient to this particular post:

  1. Be(e) nice. By nice, we don't mean merely being polite, or in the surface-level "oh bless your heart" kind of way; we mean be kind.
  2. Remember the human. The users that you interact with on Beehaw (and most likely other parts of the internet) are people, and people should be treated kindly and in good-faith whenever possible.
  3. Assume good faith. Whenever possible, and until demonstrated otherwise, assume that users don't have a secret, evil agenda. If you think they might be saying or implying something you think is bad, ask them to clarify (kindly) and give them a chance to explain. Most likely, they've communicated themselves poorly, or you've misunderstood. After all of that, it's possible that you may disagree with them still, but we can disagree about Technology and still give one another the respect due to other humans.

Python security developer-in-residence decries use of bots that 'cannot understand code'

Software vulnerability submissions generated by AI models have ushered in a "new era of slop security reports for open source" – and the devs maintaining these projects wish bug hunters would rely less on results produced by machine learning assistants.

Seth Larson, security developer-in-residence at the Python Software Foundation, raised the issue in a blog post last week, urging those reporting bugs not to use AI systems for bug hunting.

"Recently I've noticed an uptick in extremely low-quality, spammy, and LLM-hallucinated security reports to open source projects," he wrote, pointing to similar findings from the Curl project in January. "These reports appear at first glance to be potentially legitimate and thus require time to refute."

Larson argued that low-quality reports should be treated as if they're malicious.

As if to underscore the persistence of these concerns, a Curl project bug report posted on December 8 shows that nearly a year after maintainer Daniel Stenberg raised the issue, he's still confronted by "AI slop" – and wasting his time arguing with a bug submitter who may be partially or entirely automated.

In response to the bug report, Stenberg wrote:

We receive AI slop like this regularly and at volume. You contribute to [the] unnecessary load of Curl maintainers and I refuse to take that lightly and I am determined to act swiftly against it. Now and going forward.

You submitted what seems to be an obvious AI slop 'report' where you say there is a security problem, probably because an AI tricked you into believing this. You then waste our time by not telling us that an AI did this for you and you then continue the discussion with even more crap responses – seemingly also generated by AI.

Spammy, low-grade online content existed long before chatbots, but generative AI models have made it easier to produce the stuff. The result is pollution in journalism, web search, and of course social media.

For open source projects, AI-assisted bug reports are particularly pernicious because they require consideration and evaluation from security engineers – many of them volunteers – who are already pressed for time.

Larson told The Register that while he sees relatively few low-quality AI bug reports – fewer than ten each month – they represent the proverbial canary in the coal mine.

"Whatever happens to Python or pip is likely to eventually happen to more projects or more frequently," he warned. "I am concerned mostly about maintainers that are handling this in isolation. If they don't know that AI-generated reports are commonplace, they might not be able to recognize what's happening before wasting tons of time on a false report. Wasting precious volunteer time doing something you don't love and in the end for nothing is the surest way to burn out maintainers or drive them away from security work."

Larson argued that the open source community needs to get ahead of this trend to mitigate potential damage.

"I am hesitant to say that 'more tech' is what will solve the problem," he said. "I think open source security needs some fundamental changes. It can't keep falling onto a small number of maintainers to do the work, and we need more normalization and visibility into these types of open source contributions.

"We should be answering the question: 'how do we get more trusted individuals involved in open source?' Funding for staffing is one answer – such as my own grant through Alpha-Omega – and involvement from donated employment time is another."

While the open source community mulls how to respond, Larson asks that bug submitters not submit reports unless they've been verified by a human – and don't use AI, because "these systems today cannot understand code." He also urges platforms that accept vulnerability reports on behalf of maintainers to take steps to limit automated or abusive security report creation.


Archived version

Researchers at the Lookout Threat Lab have discovered a surveillance family, dubbed EagleMsgSpy, used by law enforcement in China to collect extensive information from mobile devices. Lookout has acquired several variants of the Android-targeted tool; internal documents obtained from open directories on attacker infrastructure also allude to the existence of an iOS component that has not yet been uncovered.

  • EagleMsgSpy is a lawful-intercept surveillance tool developed by a Chinese software company and used by public security bureaus in mainland China.
  • Early samples indicate the surveillance tool has been operational since at least 2017, with development continuing into late 2024.
  • The surveillanceware consists of two parts: an installer APK, and a surveillance client that runs headlessly on the device when installed.
  • EagleMsgSpy collects extensive data from the user: third-party chat messages, screen recording and screenshot capture, audio recordings, call logs, device contacts, SMS messages, location data, network activity.
  • Infrastructure overlap and artifacts from open command and control directories allow us to attribute the surveillanceware to Wuhan Chinasoft Token Information Technology Co., Ltd. (武汉中软通证信息技术有限公司) with high confidence.
  • EagleMsgSpy appears to require physical access to a target device in order to activate the information gathering operation by deploying an installer module that's then responsible for delivering the core payload.

Connections to other Chinese Surveillanceware Apps

Infrastructure sharing SSL certificates with EagleMsgSpy C2 servers was also used by known Chinese surveillance tools in earlier campaigns, the report says.

A sample of CarbonSteal - a surveillance tool discovered by Lookout and attributed to Chinese APTs - was observed communicating with another IP tied to the EagleMsgSpy SSL certificate, 119.36.193[.]210. This sample, created in July 2016, masquerades as a system application called “AutoUpdate”.
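
That certificate-pivot technique is straightforward to sketch: collect the TLS certificate each suspect endpoint presents and group together the endpoints whose certificates share a fingerprint. The minimal Python sketch below is illustrative only; the hostnames are hypothetical placeholders rather than real C2 infrastructure, and real attribution work (like Lookout's) combines certificate overlap with many other signals.

```python
# Minimal sketch: pivot on shared TLS certificates across suspect endpoints.
# The hostnames below are hypothetical placeholders, not real infrastructure.
import hashlib
import ssl
from collections import defaultdict

SUSPECT_HOSTS = [                     # hypothetical example endpoints
    ("c2.example-one.test", 443),
    ("c2.example-two.test", 8443),
]

def cert_fingerprint(host: str, port: int) -> str:
    """Fetch the server certificate and return its SHA-256 fingerprint."""
    pem = ssl.get_server_certificate((host, port), timeout=10)
    der = ssl.PEM_cert_to_DER_cert(pem)
    return hashlib.sha256(der).hexdigest()

def group_by_certificate(hosts):
    """Group endpoints that present the same certificate fingerprint."""
    groups = defaultdict(list)
    for host, port in hosts:
        try:
            groups[cert_fingerprint(host, port)].append(f"{host}:{port}")
        except OSError as exc:        # unreachable host, TLS failure, etc.
            print(f"skipped {host}:{port}: {exc}")
    return {fp: eps for fp, eps in groups.items() if len(eps) > 1}

if __name__ == "__main__":
    for fp, endpoints in group_by_certificate(SUSPECT_HOSTS).items():
        print(f"shared certificate {fp[:16]}...: {', '.join(endpoints)}")
```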

In a 2020 threat advisory, Lookout researchers detailed CarbonSteal activity in campaigns targeting minorities in China, including Uyghurs and Tibetans.

Significant overlap in signing certificates, infrastructure, and code was observed between CarbonSteal and other known Chinese surveillanceware, including Silkbean, HenBox, DarthPusher, DoubleAgent and PluginPhantom.


cross-posted from: https://beehaw.org/post/17509380

Archived

Here is the full report (pdf, 28 pages)

China is rapidly advancing its global propaganda strategies through international communication centers (ICCs), with over 100 centers established since 2018 — most since 2023. These centers aim to amplify the Chinese Communist Party's (CCP) voice on the international stage, targeting specific audiences with tailored messaging (a strategy known as “precise communication”). ICCs coordinate local, national, and international resources to build China's image, share political narratives, and promote economic partnerships.

By leveraging inauthentic social media amplification, foreign influencers, and collaborations with overseas media, ICCs advance China's multi-layered propaganda approach. For instance, Fujian's ICC reportedly manages TikTok accounts targeting Taiwanese audiences, likely including a covert account called Two Tea Eggs that is highly critical of the Taiwan government. On YouTube, the same ICC promotes videos of Taiwanese individuals praising China. These centers are strategically positioned to promote China's interests during geopolitical crises, despite challenges like limited credibility and resource constraints.

[...]

ICCs employ various tactics to achieve their objectives. Social media operations form a core component of their strategy, with thousands of accounts active across platforms like Facebook, YouTube, and TikTok. Many of these accounts lack transparency about their state affiliations, enabling covert influence campaigns. Additionally, ICCs leverage foreign influencers and “communication officers” to amplify China’s narratives through user-generated content, vlogs, and experiential propaganda.

Collaboration with overseas media organizations further enhances ICCs' reach and legitimacy. Through actions like organizing foreign journalist visits to China, ICCs create an impression of organic coverage and offer an alternative to Western narratives. These partnerships — reportedly established in Australia, Brazil, Cambodia, Egypt, France, Japan, Russia, the United States, and elsewhere — are complemented by localized propaganda activities that align with China’s economic and geopolitical interests.

[...]


cross-posted from: https://lemmy.world/post/22994927

On Tuesday, an international team of researchers unveiled BadRAM, a proof-of-concept attack that completely undermines security assurances that chipmaker AMD makes to users of one of its most expensive and well-fortified microprocessor product lines. Starting with the AMD Epyc 7003 processor, a feature known as SEV-SNP—short for Secure Encrypted Virtualization and Secure Nested Paging—has provided the cryptographic means for certifying that a VM hasn’t been compromised by any sort of backdoor installed by someone with access to the physical machine running it.
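
As a rough illustration of the attestation idea behind SEV-SNP (this is not AMD's actual report format, key hierarchy, or firmware interface), the sketch below hashes a guest's initial state into a launch measurement, has a hardware-rooted key vouch for it, and lets a remote verifier compare the result against an expected value. The function and key names are invented for the example, and HMAC stands in for the real signed attestation report.

```python
# Simplified illustration of measurement-based attestation.
# NOT AMD's actual SEV-SNP report format or key hierarchy.
import hashlib
import hmac
import json

# Stand-in for a key fused into hardware and never exposed to the host OS.
HARDWARE_ROOT_KEY = b"device-unique-secret"

def launch_measurement(initial_memory: bytes, vm_config: dict) -> bytes:
    """Hash the guest's initial memory image together with its configuration."""
    h = hashlib.sha384()
    h.update(initial_memory)
    h.update(json.dumps(vm_config, sort_keys=True).encode())
    return h.digest()

def attestation_report(measurement: bytes, nonce: bytes) -> dict:
    """'Sign' the measurement plus a verifier-supplied nonce (HMAC as a stand-in)."""
    mac = hmac.new(HARDWARE_ROOT_KEY, measurement + nonce, hashlib.sha384)
    return {"measurement": measurement.hex(), "nonce": nonce.hex(),
            "signature": mac.hexdigest()}

def verify(report: dict, expected_measurement: bytes, nonce: bytes) -> bool:
    """Remote verifier: check the signature, the nonce, and the expected state."""
    mac = hmac.new(HARDWARE_ROOT_KEY,
                   bytes.fromhex(report["measurement"]) + nonce, hashlib.sha384)
    return (hmac.compare_digest(mac.hexdigest(), report["signature"])
            and report["nonce"] == nonce.hex()
            and bytes.fromhex(report["measurement"]) == expected_measurement)
```

A verifier holding the expected measurement for a known-good VM image would reject any report whose measurement or signature does not match; BadRAM's significance is that it undermines assurances of exactly this kind.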


Friends, please help me out with this frustrating issue. Green crosshair highlights show up every time I click on a cell in an Excel spreadsheet (the row and column corresponding to that particular cell are automatically highlighted). It's extremely distracting, and what baffles me is that many of the online solutions and videos are not helping! I have pressed Escape many times, tried again after rebooting the device and the Excel application, and cleared conditional formatting. Furthermore, I am not seeing the "Enable Pointer Shadow" option or the other settings described under Excel's Advanced display options, contrary to the instructions provided by ChatGPT and YouTube videos. Thank you for any help you can share!


Archived

It's no secret that President Xi Jinping's government uses technology companies to help maintain the nation's massive surveillance apparatus.

But in addition to the government forcing businesses operating in China to stockpile and hand over info about their users for censorship and state-snooping purposes, a black market for individuals' sensitive data is also booming. Corporate and government insiders have access to this harvested private info, and financial incentives to sell that data to fraudsters and crooks who exploit it.

...

"The data is being collected by rich and powerful people that control technology companies and work in the government, but it can also be used against them in all of these scams and fraud and other low-level crimes," [SpyCloud infosec researcher Aurora] Johnson says.

...

To get their hands on the personal info, Chinese data brokers often recruit shady insiders with wanted ads seeking "friends" working in government, promising daily income of 20,000 to 70,000 yuan ($2,700 to $9,700) in exchange for harvested information. This data is then used to pull off scams, fraud, and suchlike.

Some of these data brokers also claim to have "signed formal contracts" with the big three Chinese telecom companies: China Mobile, China Unicom, and China Telecom. The brokers' marketing materials tout that they are able to legally obtain and sell details of people's internet habits via the Chinese telcos' deep packet inspection systems, which monitor, manage, and store network traffic. (The West has also seen this kind of thing.)

Crucially, this level of surveillance by the telcos gives their employees access to users' browsing data and other info, which workers can swipe and then resell through various brokers.

...

"There is a huge ecosystem of Chinese breached and leaked data, and I don't know that a lot of Western cybersecurity researchers are looking at this," Johnson continued. "It poses privacy risks to all Chinese people across all groups. And then it also gives us Western cybersecurity researchers a really interesting source to track some of these actors that have been targeting critical infrastructure."


Apple's initial AI features were met with a lukewarm reception, but the upcoming iOS 18.2 update promises a more substantial AI experience for iPhone users.


Archived

The Austrian satire magazine 'Die Tagespresse' (comparable, perhaps, to 'The Onion' in the US) is understandably not known for factual reporting.

This week, however, the magazine undertook some serious research. They carried out what they called a 'self-experiment', as described in the magazine:

"We register [with Chinese platform TikTok] with disposable emails and create nine accounts of fictitious Austrian teenagers [aged 14 to 17] from each of the nine federal states [in Austria]. The app does not require any proof of age or identity."

"Then we start a screen recording and scroll through the video feed for ten minutes. We forbid ourselves the search function, we like and comment nothing to give the algorithm no information about what we think is good or bad."

"Only the perfect Chinese code should decide what young Austrians will see."

The article is very long and I don't want to post the whole text here (you will surely find a useful translation), but I provide a summary in English:

  • All nine Austrian teenagers between 14 and 17 years of age see radical right-wing propaganda, "delivered free to their homes from China," as the magazine writes.

  • The young people see Herbert Kickl, the current leader of the far-right Austrian Freedom Party, the avatar of Jörg Haider, a former right-wing politician who died in a car accident in 2008, and Alice Weidel, the head of the far-right AfD (Alternative für Deutschland, Alternative for Germany).

  • Russian propaganda appears too, promoting immigration to Russia: "We offer work, a house, a Russian wife and military training," a mock Vladimir Putin promises a 15-year-old teenager from Styria, one of Austria's nine states. Teenagers need only apply at "einbürgerung@kreml.ru".

  • Donald Trump is doing his 'Trump Dance', anti-EU propaganda and pro-Islamic propaganda are as widespread as Quran videos, and, of course, there’s no lack of China’s Xi Jinping.

The magazine writes:

Fortunately, the self-experiment is already coming to an end, but it gets wild again.

The algorithm cannot decide whether Elias [one of the names the magazine used for its teenager accounts] should be radicalized to the extreme right or to Islamism. In between, a video of Andrew Tate, who is serving prison time in Romania for alleged human trafficking and wants to take away women's right to vote, must not be missed. In the end, only more Quran videos emerge, interrupted by two interspersed clips from the Chamber of Labour, with which the Socialists apparently try to pull Elias out of Islamism at the last moment. What a photo finish!

The magazine concludes:

At the end of this self-experiment, it's hard to put your own feelings into words while you're brushing off your shoulders the millions of dead brain cells that have leaked out of your ears as you scrolled through the app. Our brains feel a few million brain cells lighter; our IQ has dropped by 12 points from scrolling. Alcohol and psychotropic drugs don't help anymore.

We can't decide: Should we quote Quran verses and blow up the Tomorrowland festival in the name of Allah? Or join the Catholic church until Putin provides us with a neo-Nazi bride whose zodiac sign is Aries?

Among the questions now are:

  • Why do Austrian teenagers see this propaganda nonsense and only this propaganda nonsense?

  • What kind of algorithm is this?

  • What is this aiming at?

[Edit for clarity. Second edit for replacing "German satire magazine" by "Austrian" in the first sentence.]


cross-posted from: https://beehaw.org/post/17460850

Archived

In a video, Taiwanese rapper Chen Po-yuan (陳柏源) showed how the Chinese Communist Party (CCP) bribes Taiwanese online influencers in its "united front" efforts to shape Taiwanese opinions.

The video was made by YouTuber “Pa Chiung (八炯)” and published online on Friday.

In the video, Chen said that China's United Front Work Department provided him with several templates and materials, such as news statements to make, some mentioning Chinese Nationalist Party (KMT) politician Hung Hsiu-chu (洪秀柱) and New Taipei City Mayor Hou You-yi (侯友宜), and asked him to write a song criticizing the Democratic Progressive Party.

He said he had produced content for China as requested, but did not receive the royalties as promised by a Beijing-based management company for his song Chinese Bosses (中國老總), which is sung in an exaggerated Taiwanese accent with lyrics implying a pleasant life for businesspeople in China.

Chen said he also founded a company in China jointly with a business partner from the Jinjiang Taiwan Compatriots Friendship Association, who worked as his manager and later poached all of his employees along with the capital invested in the company.

He was labeled as a fraud and a “Taiwanese independence separatist,” and attacked by Chinese Internet trolls, after he released an online video condemning his former business partner for betraying him.

“I finally realized the hard way that where I was staying [China] was not a place of democracy,” Chen said, adding that there is a huge difference between democratic Taiwan and autocratic China.

...


One reason hyperlinks work like they do – why they index other kinds of affiliation – is that they were first devised to exhibit the connections researchers made among different sources as they developed new ideas. Early plans for what became the hypertext protocol of Tim Berners-Lee’s World Wide Web were presented as tools for documenting how human minds tend to move from idea to idea, connecting external stimuli and internal reflections. Links treat creativity as the work of remediating and remaking, which is foregrounded in the slogan for Google Scholar: ‘Stand on the shoulders of giants.’

But now Google and other websites are moving away from relying on links in favour of artificial intelligence chatbots. Considered as preserved trails of connected ideas, links make sense as early victims of the AI revolution since large language models (LLMs) such as ChatGPT, Google’s Gemini and others abstract the information represented online and present it in source-less summaries. We are at a moment in the history of the web in which the link itself – the countless connections made by website creators, the endless tapestry of ideas woven together throughout the web – is in danger of going extinct. So it’s pertinent to ask: how did links come to represent information in the first place? And what’s at stake in the movement away from links toward AI chat interfaces?


The work of making connections both among websites and in a person’s own thinking is what AI chatbots are designed to replace. Most discussions of AI are concerned with how soon an AI model will achieve ‘artificial general intelligence’ or at what point AI entities will be able to dictate their own tasks and make their own choices. But a more basic and immediate question is: what pattern of activity do AI platforms currently produce? Does AI dream of itself?

If Pope's poem floods the reader with voices (from the dunces in the verse to the competing commenters in the footnotes), AI chatbots tend toward the opposite effect. Whether ChatGPT or Google's Gemini, AI synthesises numerous voices into a flat monotone. The platforms present an opening answer, bulleted lists and concluding summaries. If you ask ChatGPT to describe its voice, it says that it has been trained to answer in a neutral and clear tone. The point of the platform is to sound like no one.


TikTok's bid to overturn a law which would see it banned or sold in the US from early 2025 has been rejected.

The social media company had hoped a federal appeals court would agree with its argument that the law was unconstitutional because it represented a "staggering" impact on the free speech of its 170 million US users.

But the court upheld the law, which it said "was the culmination of extensive, bipartisan action by the Congress and by successive presidents".

[...]

The court agreed the law was "carefully crafted to deal only with control by a foreign adversary, and it was part of a broader effort to counter a well-substantiated national security threat posed by the PRC (People's Republic of China)."


Archived

It's not just Microsoft and CrowdStrike: Cloudflare, the internet infrastructure giant, experienced a major outage on November 14th, resulting in the irreversible loss of over half of the log data generated during the incident. The outage, which lasted for 3.5 hours, stemmed from a faulty software update that crippled the company's log service, preventing it from delivering crucial data to customers.

Log services are essential for network operations, allowing businesses to analyze traffic patterns, troubleshoot issues, and detect malicious activity. Cloudflare’s log service, which processes massive volumes of data, relies on a tool called Logpush to package and deliver this information to customers.

However, an update to Logpush on November 14th contained a critical error. As Cloudflare explained in their incident report, the update failed to instruct auxiliary tools to forward the collected logs, leading to a situation where logs were gathered but never delivered. This data was subsequently erased from the cache, resulting in permanent loss.

“A misconfiguration in one part of the system caused a cascading overload in another part of the system, which was itself misconfigured. Had it been properly configured, it could have prevented the loss of logs,” Cloudflare stated in their report.

While engineers quickly identified the flaw and rolled back the update, this triggered a cascading failure. The system was flooded with an overwhelming influx of log data, including data from users who hadn’t even configured Logpush, further exacerbating the issue.
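
The core failure mode, as described in the incident report, was logs being gathered but never handed off, then aging out of a bounded cache. The toy sketch below illustrates that mechanism under assumed names; it is not Cloudflare's actual Logpush pipeline, which is vastly more complex.

```python
# Toy illustration of "collected but never delivered" log loss.
# Names and structure are assumptions, not Cloudflare's actual pipeline.
from collections import deque

class LogBuffer:
    """Bounded cache: the oldest entries are evicted once capacity is exceeded."""
    def __init__(self, capacity: int):
        self.cache = deque(maxlen=capacity)
        self.lost = 0

    def collect(self, event: str) -> None:
        if len(self.cache) == self.cache.maxlen:
            self.lost += 1           # the oldest unsent event is about to be evicted
        self.cache.append(event)

    def push(self, forwarding_enabled: bool) -> list[str]:
        """Deliver buffered events downstream, unless forwarding was misconfigured."""
        if not forwarding_enabled:
            return []                # logs sit in the cache until they are evicted
        delivered = list(self.cache)
        self.cache.clear()
        return delivered

buf = LogBuffer(capacity=1000)
for i in range(5000):
    buf.collect(f"request {i}")
    buf.push(forwarding_enabled=False)   # the faulty update: never forwards

print(f"delivered: 0, permanently lost: {buf.lost}")   # 4000 evicted, 1000 still cached
```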

Cloudflare has issued an apology for the incident and the permanent loss of user data.
