Flmaker

joined 2 months ago
[–] Flmaker@lemmy.world 1 points 4 days ago (1 children)

I have been testing it, and I'm not sure if I want to move my email account. It sometimes cannot connect to their server to sync via Thunderbird (PC & Android), and I have to try a few times on occasion. It looks like their server capacity is not as good as the major providers'.

[–] Flmaker@lemmy.world 1 points 1 week ago (1 children)

I've got a free account to test it for now. According to the help files, I could get another free one if I delete the first. I already have a domain name; if I point my domain's email to Infomaniak, will I be charged for that, or would having one email address still be free?

[–] Flmaker@lemmy.world 2 points 1 week ago

That's a good point. I have already been using a similar one: https://duckduckgo.com/email/

[–] Flmaker@lemmy.world 1 points 1 week ago

Thanks for the heads up!

I also reached out to support and got a pretty generic reply: "If your question is different, please reply." So I did, but still no response.

Adding or editing the calendar on Infomaniak through Thunderbird can be hit or miss—sometimes the server is down or just busy. It seems to be working fine now, though.

I already moved my Google Calendar and deleted the calendar on my Google account

I found a bunch of complaints (several pages) from other users about Infomaniak, but then I stopped collecting them. I can share them here if you're interested!

Most entertaining info: "We never share your personal data with third parties without good reason" https://www.infomaniak.com/en/legal/confidentiality-policy

[–] Flmaker@lemmy.world 1 points 1 week ago (1 children)

Thank you "Nextcloud offers a free account for home users" would be a very good option I have started testing infomaniak at present although nextcloud would also be good choice best

[–] Flmaker@lemmy.world 2 points 1 week ago (3 children)

Thank you, I did that. I joined the free version of Infomaniak and it works with Thunderbird.

[–] Flmaker@lemmy.world 1 points 2 weeks ago* (last edited 1 week ago)

Thanks a lot

I asked for: "a free replacement for Gmail and a free Google Calendar alternative that works well on both Android and via Thunderbird on Windows"

Recommended by the members here: "have you tried infomaniak? They also have a calendar and an online office suite… and their free plan is nice"

Then I picked Infomaniak.

My latest UPDATE: "I just started a free account with Infomaniak. So far I'm happy with it, except I'm not sure whether the free email addresses on domains like @ik.me, @etik.com, or @ikmail.com will be accepted / recognized by institutions like banks."

[–] Flmaker@lemmy.world 2 points 2 weeks ago* (last edited 1 week ago)

UPDATE: I just started a free account with Infomaniak. So far I'm happy with it, except I'm not sure whether the free email addresses on domains like @ik.me, @etik.com, or @ikmail.com will be accepted / recognized by institutions like banks.

[–] Flmaker@lemmy.world 1 points 2 weeks ago* (last edited 1 week ago)

Thank you indeed. I just started a free account with Infomaniak. So far I'm happy with it, except I'm not sure whether the free email addresses on domains like @ik.me, @etik.com, or @ikmail.com will be accepted / recognized by institutions like banks.

[–] Flmaker@lemmy.world 0 points 2 weeks ago* (last edited 1 week ago)

I just started an account with Infomaniak. So far I'm happy with it, except I'm not sure whether the free email addresses on domains like @ik.me, @etik.com, or @ikmail.com will be accepted / recognized by institutions like banks.

[–] Flmaker@lemmy.world 2 points 2 weeks ago* (last edited 1 week ago)

I just started an account with Infomaniak. So far I'm happy with it, except I'm not sure whether the free email addresses on domains like @ik.me, @etik.com, or @ikmail.com will be accepted / recognized by institutions like banks.

 

cross-posted from: https://lemmy.world/post/27977693

Hey everyone!

So, I've made some small progress in switching things up on my Android:

Replaced Gmail app with Thunderbird
Replaced Google Calendar with FOSSify Calendar
Replaced Google Play Store with Aurora & F-Droid
Replaced Android file manager with FOSSify File Manager
Replaced Android keyboard with Heliboard

But now I'm hitting a wall on the root of the problem; I'm still looking for:
a free replacement for Gmail
a free Google Calendar alternative that works well on both Android and via Thunderbird on Windows

Self-hosting isn’t really an option for me, so I’d love to hear your suggestions!

If you’ve found something you really like, please share your experiences.

Thanks!


UPDATE

I just started a free account with Infomaniak. So far I'm happy with it, except I'm not sure whether the free email addresses on domains like @ik.me, @etik.com, or @ikmail.com will be accepted / recognized by institutions like banks.

I also found a community at https://www.reddit.com/r/Infomaniak/ and will search whether there is a group here as well.

 


-1
submitted 3 weeks ago* (last edited 3 weeks ago) by Flmaker@lemmy.world to c/privacy@lemmy.world
 
 

Trusting Open Source: Can We Really Verify the Code Behind the Updates?

In today's fast-paced digital landscape, open-source software has become a cornerstone of innovation and collaboration. However, as the frequency and complexity of updates increase, a pressing question arises: how can users—particularly those without extensive technical expertise—place their trust in the security and integrity of the code?

The premise of open source is that anyone can inspect the code, yet the reality is that very few individuals have the time, resources, or knowledge to conduct a thorough review of every update. This raises significant concerns about the actual vetting processes in place. What specific mechanisms or community practices are established to ensure that each update undergoes rigorous scrutiny? Are there standardized protocols for code review, and how are contributors held accountable for their changes?

Moreover, the sheer scale of many open-source projects complicates the review process. With numerous contributors and rapid iterations, how can we be confident that the review processes are not merely cursory but genuinely comprehensive and transparent? The potential for malicious actors to introduce vulnerabilities or backdoors into the codebase is a real threat that cannot be ignored. What concrete safeguards exist to detect and mitigate such risks before they reach end users?

Furthermore, the burden of verification often falls disproportionately on individual users, many of whom may lack the technical acumen to identify potential security flaws. This raises an essential question: how can the open-source community foster an environment of trust when the responsibility for code verification is placed on those who may not have the expertise to perform it effectively?

In light of these challenges, it is crucial for the open-source community to implement robust mechanisms for accountability, transparency, and user education. This includes fostering a culture of thorough code reviews, encouraging community engagement in the vetting process, and providing accessible resources for users to understand the software they rely on.
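One accessible piece of such a framework, already common in many projects, is letting users confirm that the update they downloaded matches the artifact the maintainers actually published, for example by comparing checksums or signatures. Below is a minimal sketch in Kotlin, assuming the project publishes a SHA-256 checksum on its release page; the file name and expected value are placeholders, not real data.

```kotlin
import java.io.File
import java.security.MessageDigest

// Compute the SHA-256 of a downloaded release file so it can be compared against
// the checksum the project publishes. File name and expected value are placeholders.
fun sha256Of(file: File): String {
    val digest = MessageDigest.getInstance("SHA-256")
    file.inputStream().use { input ->
        val buffer = ByteArray(8192)
        while (true) {
            val read = input.read(buffer)
            if (read < 0) break
            digest.update(buffer, 0, read)
        }
    }
    return digest.digest().joinToString("") { "%02x".format(it) }
}

fun main() {
    val expected = "PUBLISHED_SHA256_FROM_RELEASE_PAGE"   // placeholder
    val actual = sha256Of(File("downloaded-update.zip"))  // placeholder path
    println(if (actual.equals(expected, ignoreCase = true)) "checksum OK" else "checksum MISMATCH")
}
```

Of course, a matching checksum only proves the artifact wasn't altered in transit; review and reproducible builds are still needed for the deeper trust questions raised above.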

Ultimately, as we navigate the complexities of open-source software, we must confront the uncomfortable truth: without a reliable framework for verification, the trust we place in these systems may be misplaced. How can we ensure that the promise of open source is not undermined by the very vulnerabilities it seeks to eliminate?

 

cross-posted from: https://lemmy.world/post/27344091

  1. Persistent Device Identifiers

My id is (1 digit changed to preserve my privacy):

38400000-8cf0-11bd-b23e-30b96e40000d

Android assigns Advertising IDs, unique identifiers that apps and advertisers use to track users across installations and account changes. Google explicitly states:

“The advertising ID is a unique, user-resettable ID for advertising, provided by Google Play services. It gives users better controls and provides developers with a simple, standard system to continue to monetize their apps.” Source: Google Android Developer Documentation

This ID allows apps to rebuild user profiles even after resets, enabling persistent tracking.
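For context, reading this identifier takes only a few lines against the Google Play services ads-identifier API. A minimal Kotlin sketch, assuming the play-services-ads-identifier dependency and, on Android 13 and later, the com.google.android.gms.permission.AD_ID permission in the manifest:

```kotlin
import android.content.Context
import com.google.android.gms.ads.identifier.AdvertisingIdClient

// Must be called from a background thread: getAdvertisingIdInfo performs a
// blocking call to Google Play services and refuses to run on the main thread.
fun readAdvertisingId(context: Context): String? {
    val info = AdvertisingIdClient.getAdvertisingIdInfo(context)
    // Respecting the user's opt-out here is up to the app; a tracker could ignore it.
    return if (info.isLimitAdTrackingEnabled) null else info.id
}
```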

  2. Tracking via Cookies

Android’s web and app environments rely on cookies with unique identifiers. The W3C (web standards body) confirms:

“HTTP cookies are used to identify specific users and improve their web experience by storing session data, authentication, and tracking information.” Source: W3C HTTP State Management Mechanism https://www.w3.org/Protocols/rfc2109/rfc2109

Google’s Privacy Sandbox initiative further admits cookies are used for cross-site tracking:

“Third-party cookies have been a cornerstone of the web for decades… but they can also be used to track users across sites.” Source: Google Privacy Sandbox https://privacysandbox.com/intl/en_us/
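To make the mechanism concrete, here is a hedged sketch of what an identifier cookie looks like when parsed; the header value, the "uid" name, and the domain are invented for illustration, not taken from any real site:

```kotlin
import java.net.HttpCookie

fun main() {
    // An invented Set-Cookie value carrying a long-lived unique identifier.
    val header = "uid=4f9c2d1e-example-unique-id; Max-Age=31536000; Domain=.example.com; Path=/"
    val cookie = HttpCookie.parse(header).first()
    // The browser sends the same "uid" value back on every request to *.example.com
    // for a year, which is what lets the server recognise the same user across
    // visits and across sites that embed content from that domain.
    println("${cookie.name}=${cookie.value} maxAge=${cookie.maxAge}s domain=${cookie.domain}")
}
```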

  3. Ad-Driven Data Collection

Google’s ad platforms, like AdMob, collect behavioral data to refine targeting. The FTC found in a 2019 settlement:

“YouTube illegally harvested children’s data without parental consent, using it to target ads to minors.” Source: FTC Press Release https://www.ftc.gov/news-events/press-releases/2019/09/google-youtube-will-pay-record-170-million-settlement-over-claims

A 2022 study by Aarhus University confirmed:

“87% of Android apps share data with third parties.” Source: Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies https://dl.acm.org/doi/10.1145/3534593

  4. Device Fingerprinting

Android permits fingerprinting by allowing apps to access device metadata. The Electronic Frontier Foundation (EFF) warns:

“Even when users reset their Advertising ID, fingerprinting techniques combine static device attributes (e.g., OS version, hardware specs) to re-identify them.” Source: EFF Technical Analysis https://www.eff.org/deeplinks/2021/03/googles-floc-terrible-idea
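As an illustration of what the EFF describes (a sketch, not code from any actual tracker), a handful of static attributes readable by any Android app can be hashed into an identifier that stays stable across an Advertising ID reset; the choice of attributes below is an assumption for the example:

```kotlin
import android.os.Build
import java.security.MessageDigest
import java.util.Locale
import java.util.TimeZone

fun deviceFingerprint(): String {
    // Static attributes that do not change when the Advertising ID is reset.
    val attributes = listOf(
        Build.MANUFACTURER,                 // e.g. "Google"
        Build.MODEL,                        // e.g. "Pixel 7"
        Build.VERSION.RELEASE,              // Android version string
        Build.VERSION.SDK_INT.toString(),   // API level
        Locale.getDefault().toLanguageTag(),
        TimeZone.getDefault().id
    )
    // Hash the combined attributes into a stable pseudo-identifier.
    val digest = MessageDigest.getInstance("SHA-256")
        .digest(attributes.joinToString("|").toByteArray())
    return digest.joinToString("") { "%02x".format(it) }
}
```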

  5. Hardware-Level Tracking

Google’s Titan M security chip, embedded in Pixel devices, operates independently of software controls. Researchers at Technische Universität Berlin noted:

“Hardware-level components like Titan M can execute processes that users cannot audit or disable, raising concerns about opaque data collection.” Source: TU Berlin Research Paper https://arxiv.org/abs/2105.14442

Regarding Titan M: much of the research on it is being taken down, and very little remains online. This is one paper that is still available today.

"In this paper, we provided the first study of the Titan M chip, recently introduced by Google in its Pixel smartphones. Despite being a key element in the security of these devices, no research is available on the subject and very little information is publicly available. We approached the target from different perspectives: we statically reverse-engineered the firmware, we audited the available libraries on the Android repositories, and we dynamically examined its memory layout by exploiting a known vulnerability. Then, we used the knowledge obtained through our study to design and implement a structure-aware black-box fuzzer, mutating valid Protobuf messages to automatically test the firmware. Leveraging our fuzzer, we identified several known vulnerabilities in a recent version of the firmware. Moreover, we discovered a 0-day vulnerability, which we responsibly disclosed to the vendor."

Ref: https://conand.me/publications/melotti-titanm-2021.pdf

  6. Notification Overload

A 2021 UC Berkeley study found:

“Android apps send 45% more notifications than iOS apps, often prioritizing engagement over utility. Notifications act as a ‘hook’ to drive app usage and data collection.” Source: Proceedings of the ACM on Human-Computer Interaction https://dl.acm.org/doi/10.1145/3411764.3445589

How can this be used nefariously?

Let's say you are a person who believes in Truth and searches all over the net for it. You find some things that are true and post them somewhere, and you are taken down. You accept it, since this is ONLY one time.

But, this is where YOU ARE WRONG.

THEY can easily obtain your IDs, specifically your advertising ID or one of the others above. They can send this to Google to find out which email accounts are associated with these IDs. With 99.9% accuracy, AI can identify the correct email, because your email and ID will have logged into Google simultaneously thousands of times in the past.

Then they can CENSOR you ACROSS the internet (YouTube, Reddit, etc.) because they know your ID. Even if you change your phone, they still have other IDs, like your email, and you can't remove all of them. This is how it can be used for censoring. (They will shadow-ban you, and you won't know it.)

 

 

Need Your Suggestions: RSS Reader for Windows PC

I have been happy with a podcast player's feed reader on my Android for some time, but I am about to give up because the screen size makes it difficult to read long articles, and I need an app for a Windows PC (one that fetches the full text and lets me read it offline).

I would appreciate your guidance on the best recommended RSS readers for Windows PC that are:

- Visually appealing on a Windows laptop
- Able to fetch feeds with full text and let me read them offline
 

 

Join this tactical, practical, and heretical discussion between Meredith Whittaker, President of Signal and leading advocate for secure communication, and Guy Kawasaki, host of the Remarkable People podcast.

 

Appreciate your help please

 

FBI Warns iPhone, Android Users—We Want ‘Lawful Access’ To All Your Encrypted Data. By Zak Doffman, Contributor (writes about security, surveillance and privacy). Feb 24, 2025

The furor after Apple removed full iCloud security for U.K. users may feel a long way from American users this weekend. But it’s not — far from it. What has just shocked the U.K. is exactly what the FBI told me it also wants in the U.S. “Lawful access” to any encrypted user data. The bureau’s quiet warning was confirmed just a few weeks ago.

The U.K. news cannot be seen in isolation and follows years of battling between big tech and governments over warranted, legal access to encrypted messages and content to fuel investigations into serious crimes such as terrorism and child abuse.

As I reported in 2020, “it is looking ever more likely that proponents of end-to-end security, the likes of Facebook and Apple, will lose their campaign to maintain user security as a priority.” It has taken five years, but here we now are.

The last few weeks may have seemed to signal a unique fork in the road between the U.S. and its primary Five Eyes ally, the U.K. But it isn’t. In December, the FBI and CISA warned Americans to stop sending texts and use encrypted platforms instead. And now the U.K. has forced open iCloud by threatening to mandate a backdoor. But the devil’s in the detail — and we’re fast approaching a dangerous pivot.

While CISA — America’s cyber defense agency — appears to advocate for fully secure messaging platforms, such as Signal, the FBI’s view appears to be different. When December’s encryption warnings hit in the wake of Salt Typhoon, the bureau told me while it wants to see encrypted messaging, it wants that encryption to be “responsible.”

What that means in practice, the FBI said, is that while “law enforcement supports strong, responsibly managed encryption, this encryption should be designed to protect people’s privacy and also managed so U.S. tech companies can provide readable content in response to a lawful court order.” That’s what has just happened in the U.K. Apple’s iCloud remains encrypted, but Apple holds the keys and can facilitate “readable content in response to a lawful court order.”

There are three primary providers of end-to-end encrypted messaging in the U.S. and U.K.: Apple, Google and Meta. The U.K. has just pushed Apple to compromise iMessage. And it is more than likely that “secret” discussions are also ongoing with the other two. It makes no sense to single out Apple, as that would simply push bad actors to other platforms, which will happen anyway, as is obvious to any security professional.

In doing this, the U.K. has changed the art of the possible, bringing new optionality to security agencies across the world. And it has done this against the backdrop of that U.S. push for responsible encryption and Europe’s push for “chat control.” The U.K. has suddenly given America’s security agencies a precedent to do the same.

“The FBI and our partners often can’t obtain digital evidence, which makes it even harder for us to stop the bad guys,” warned former director Christopher Wray, in comments the bureau directed me towards. “The reality is we have an entirely unfettered space that’s completely beyond fully lawful access — a place where child predators, terrorists, and spies can conceal their communications and operate with impunity — and we’ve got to find a way to deal with that problem.”

The U.K. has just found that way. It was first, but unless a public backlash sees Apple’s move reversed, it will not be last. In December, the FBI’s “responsible encryption” caveat was lost in the noise of Salt Typhoon, but it shouldn’t be lost now. The tech world can act shocked and dispirited at the U.K. news, but it has been coming for years. While the legalities are different in the U.S., the targeted outcome would be the same.

Ironically, because the U.S. and U.K. share intelligence information, some American lawmakers have petitioned the Trump administration to threaten the U.K. with sanctions unless it backtracks on the Apple encryption mandate. But that’s a political view not a security view. It’s more likely this will go the other way now. As EFF has warned, the U.K. news is an “emergency warning for us all,” and that’s exactly right.

“The public should not have to choose between safe data and safe communities, we should be able to have both — and we can have both,” Wray said. “Collecting the stuff — the evidence — is getting harder, because so much of that evidence now lives in the digital realm. Terrorists, hackers, child predators, and more are taking advantage of end-to-end encryption to conceal their communications and illegal activities from us.”

The FBI’s formal position is that it is “a strong advocate for the wide and consistent use of responsibly managed encryption — encryption that providers can decrypt and provide to law enforcement when served with a legal order.”

The challenge is that while the bureau says it “does not want encryption to be weakened or compromised so that it can be defeated by malicious actors,” it does want “providers who manage encrypted data to be able to decrypt that data and provide it to law enforcement only in response to U.S. legal process.”

That’s exactly the argument the U.K. has just run.

Somewhat cynically, the media backlash that Apple’s move has triggered is likely to have an impact, and right now it seems more likely that we will see a reversal of some sort of Apple’s move, rather than more of the same. The UK government is now exposed as the only western democracy compromising the security of tens of millions of its citizens.

Per The Daily Telegraph, “the [UK] Home Office has increasingly found itself at odds with Apple, which has made privacy and security major parts of its marketing. In 2023, the company suggested that it would prefer to shut down services such as iMessage and FaceTime in Britain than weaken their protections. It later accused the Government of seeking powers to 'secretly veto’ security features.”

But now this quiet battle is front page news around the world. The UK either needs to dig in and ignore the negative response to Apple’s forced move, or enable a compromise in the background that recognizes the interests of the many.

As The Telegraph points out, the U.S. will likely be the deciding factor in what happens next. “The Trump administration is yet to comment. But [Tim] Cook, who met the president on Thursday, will be urging him to intervene,” and perhaps more interestingly, “Elon Musk, a close adviser to Trump, criticised the UK on Friday, claiming in a post on X that the same thing would have happened in America if last November’s presidential election had ended differently.”

Former UK cybersecurity chief Ciaran Martin thinks the same. “If there’s no momentum in the U.S. political elite and US society to take on big tech over encryption, which there isn’t right now, it seems highly unlikely in the current climate that they’re going to stand for another country, however friendly, doing it.”

Meanwhile the security industry continues to rally en masse against the change.

“Apple’s decision,” an ExpressVPN spokesperson told me, “is deeply concerning. By removing end-to-end encryption from iCloud, Apple is stripping away its UK customers’ privacy protections. This will have serious consequences for Brits — making their personal data more vulnerable to cyberattacks, data breaches, and identity theft.”

It seems inconceivable the UK will force all encrypted platforms to remove that security wrap, absent which the current move becomes pointless. The reality is that the end-to-end encryption ship has sailed. It has become ubiquitous. New measures need to be found that will rely on metadata — already provided — instead of content.

Given the FBI’s stated position, what the Trump administration does in response to the UK is critical. Conceivably, the U.S. could use this as an opportunity to revisit its own encryption debate. That was certainly on the cards under a Trump administration pre-Salt Typhoon. But the furor triggered by Apple now makes that unlikely. However the original secret/not-secret news leaked, it has changed the dynamic completely.

 

by Lars Wilderang, 2025-02-11

Translated from the Swedish original

In a new instruction for fully encrypted applications, the Swedish Armed Forces have introduced a mandatory requirement that the Signal app be used for messages and calls with counterparts both within and outside the Armed Forces, provided they also use Signal.

The instruction, FM2025-61:1, specifies that Signal should be used to defend against interception of calls and messages via the telephone network and to make phone number spoofing more difficult.

It states, among other things:

“The intelligence threat to the Armed Forces is high, and interception of phone calls and messages is a known tactic used by hostile actors. […] Use a fully encrypted application for all calls and messages to counterparts both within and outside the Armed Forces who are capable of using such an application. Designated application: The Armed Forces use Signal as the fully encrypted application.”

The choice of Signal is also justified:

“The main reason for selecting Signal is that the application has widespread use among government agencies, industry, partners, allies, and other societal actors. Contributing factors include that Signal has undergone several independent external security reviews, with significant findings addressed. The security of Signal is therefore assumed to be sufficient to complicate the interception of calls and messages.

Signal is free and open-source software, which means no investments or licensing costs for the Armed Forces.”

Signal supports both audio and video calls, group chats, direct messages, and group calls, as well as a simple, event-based social media feature.

The app is available for iPhone, iPad, and Android, and for desktop operating systems including macOS, Windows, and Linux.

Since Signal can be used for phone calls, the instruction is essentially an order for the Armed Forces to stop using regular telephony and instead make calls via the Signal app whenever possible (i.e., except to companies and agencies that don’t have Signal), and no SMS or other inferior messaging services should be used.

Note that classified security-protected information should not be sent via Signal; this is about regular communication, including confidential data that is not classified as security-sensitive, as stated in the instruction. The same applies to files.

The instruction is a public document and not classified.

Signal is already used by many government agencies, including the Government Offices of Sweden and the Ministry for Foreign Affairs. However, the EU, through the so-called Chat Control (2.0), aims to ban the app, and the Swedish government is also mulling a potential ban, even though the Armed Forces now consider Signal a requirement for all phone calls and direct messaging where possible.

Furthermore, it should be noted that everyone, including for family and personal relationships, should already be using Signal for all phone-to-phone communication to ensure private, secure, verified, and authentic communication. For example, spoofing a phone number is trivial, particularly for a foreign power with a state-run telecom operator, which can, with just a few clicks, reroute all mobile calls to your phone through a foreign country’s network, or even to a phone under the control of a foreign intelligence service. There is zero security in how a phone call is routed or identified via caller ID. For instance, if a foreign power knows the phone number of the Swedish Chief of Defence’s mobile, all calls to that number could be rerouted through a Russian telecom operator. This cannot happen via Signal, whose calls cannot be rerouted or intercepted in this way.

Signal is, by the way, blocked in a number of countries with questionable views on democracy, such as Qatar (Doha), as one may discover when changing flights there. This might serve as a wake-up call.

https://cornucopia.se/2025/02/forsvarsmakten-infor-krav-pa-signal-for-samtal-och-meddelanden/
