By Dylan Hill
It has been reported that Telegram played a role in enabling the mass mobilisation of over a million protesters in Hong Kong who took to the streets last month to voice their opposition to the extradition bill. As a result, Telegram has experienced DDoS attacks hampering the service, and security experts have questioned the strength of Telegram's security (for instance, Telegram group chats are not end-to-end encrypted).
While cybersecurity experts tend to focus on sophisticated state-level attacks such as these DDoS attacks and other methods of controlling the Internet (which are legitimate concerns), a far more significant security threat occurred during the Hong Kong protests. Hong Kong activist Ivan Ip was arrested for being the administrator of a 20,000-person Telegram group and was forced by the authorities to provide his phone’s access information. As a result, some of the 20,000 people participating in a peaceful protest are now potentially identifiable to authorities (if they chose to share their phone numbers in the app). This data breach has the potential to be far more damaging, and longer-lasting, for the implicated activists than any of the DDoS attacks Telegram now faces.
Will these 20,000 activists be able to organize freely in the future, or are they now targets? Will their participation in these digital activities become the foundation for cases built against them as Hong Kong grows more restrictive? Will most members of this group decide it is no longer safe to participate in such activities? Let us all hope these potential outcomes do not become reality. The peaceful protest by the people of Hong Kong has been deeply inspiring and gives hope to many.
Note: The use of Telegram is problematic for several security reasons; see the companion post for recommendations on a more effective way to secure your communications in Hong Kong. Signal is a better option.
The Security Issue is a System Architecture Problem
Some digital security experts are asking why activists in Hong Kong would choose to use Telegram in the first place (a fair question, since experts have regularly questioned the app’s security).
However, this view obscures the fact that digital security naïveté on Ip’s part is not the reason for the data breach. The real danger in the breach Ip experienced stems from Telegram forcing its users to use phone numbers as their account IDs. This system design is not unique to Telegram. Activists seeking to use various “secure” chat tools are regularly forced into the exact same account-registration approach used by Telegram, even with its known built-in security vulnerability. This includes the popular chat apps Signal and WhatsApp. Several human rights technologists, and users of the apps, have spoken about the problem with this system design. Consider, for example, Jillian York’s critique of this approach two years ago. And requests for this feature in Signal go back at least as far as 2014. The engineers of these apps cannot feign ignorance.
It is also important to highlight that the threat Ip faced is not unusual. This tactic is one of the cheapest and most readily employed attacks against activists and journalists in various regions around the globe. China’s state-level technical expertise is not needed to demand passwords or bypass short screen locks, and for vulnerable persons and populations, or those in regions with weak protection of the right to privacy, refusing to allow authorities access to your phone is simply not a realistic option.
This means that the vulnerability Ip faced in his use of Telegram is an engineering flaw, even if it does not fall under commonly understood software vulnerabilities. Telegram and other “secure” chat apps, such as WhatsApp and Signal, have chosen a system design that ignores one of the largest security threats faced by those most in need of secure apps, and even enforces the vulnerability.
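To make the design distinction concrete, here is a minimal, hypothetical sketch (not any real app's registration API) contrasting the two approaches: an account whose ID is a verified phone number, versus an account whose ID is a random, unlinkable token. The function names and the SMS-verification flag are illustrative assumptions, not code from any of these apps.

```python
import secrets

def register_phone_based(phone_number: str, sms_code_ok: bool) -> str:
    """Phone-number-as-ID design: the account ID *is* the phone number.
    Anyone who can read the device, or a group's member list, learns a
    real-world identifier tied to the person. (Hypothetical sketch.)"""
    if not sms_code_ok:
        raise ValueError("SMS verification required")
    return phone_number  # account ID == personal identifier

def register_pseudonymous() -> str:
    """Pseudonymous design: the account ID is a random token with no
    link to a phone number or legal identity. (Hypothetical sketch.)"""
    return secrets.token_hex(16)  # 32 hex characters, reveals nothing

# A seized device exposes very different things under the two designs:
alice = register_phone_based("+852-5555-0100", sms_code_ok=True)
bob = register_pseudonymous()
```

In the first design, `alice` is itself the sensitive datum; in the second, `bob` identifies the account but not the person behind it. This is the architectural choice the article argues should be revisited.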
A Recurring Security Problem
Engineers are often quick to suggest that if people understood the technical risks, they would be able to make sound decisions and avoid such situations in the digital space. This mindset is problematic for a number of reasons. First, it places the responsibility for this particular security breach on someone like Ip, who in this case had little to no choice of secure alternatives, instead of where it belongs: with the engineers who chose to use (and maintain) a verified phone number as the account ID. Consider just some of the known options available for secure chat: Signal, WhatsApp, and Telegram all require phone number verification. Wire representatives have stated they are not ready for state-level attacks from China (even going as far as to ask an Asia-focused NGO to encourage Chinese citizens not to use the app). Line has become too commercial, and its encryption was never fully vetted. The code base for ChatSecure has recently begun a move to an architecture that uses a Matrix backend, whose servers recently faced a significant hack. The list goes on. What choice did Ip really have?
Also consider that while training activists, journalists, and human rights workers absolutely can have substantial safety benefits, it doesn’t solve the underlying problem for this particular security breach. In addition, training is costly and time-consuming (organizing such events takes time for raising funds, building curriculum, establishing safe and secure protocols and logistics, etc.), and the events can be a security risk in and of themselves. And just how do you rapidly scale this approach to 20,000 people? How do you rapidly scale it to the more than 1,000,000 people who protested?
Human rights technologists and some tech-savvy individuals have found workarounds for this phone number requirement that let users mask their identity when using these apps. However, some of the people most in need of such apps don’t have access to the resources these workarounds require (burner phones and burner SIMs may be unavailable where they live, or beyond their financial means). Even in regions not as restrictive as China (where real-name requirements make it virtually impossible to have a phone number unassociated with your identity), it is common for the law to require identification to obtain a phone number. And the workarounds required to mask your identity when using these apps have become harder and harder to implement.
Doesn’t it make more sense to fix the engineered security flaw?
This is a serious question. Doesn’t it make more sense to just fix this engineered security flaw?
I am fairly confident these software organizations did not intentionally design their systems to incorporate such a significant security flaw. These design choices were likely made to make the tools easy to adopt and to make it easy to connect with the people you want to chat with. That is understandable, and even valuable: the increased use of these apps is a good thing.
Also, making the type of architectural change needed to fix the flaw on an active system with a large userbase is not simple or cheap. However, with an incident such as Ip’s, it is not possible to ignore the security implications. And the justifications given by the engineers of these apps become harder and harder to accept.
Telegram has more than one significant security flaw (e.g., consider that group chats are not end-to-end encrypted), so it is not entirely clear that the phone number requirement is the first issue it needs to address if it wishes to keep framing itself as a secure option. Facebook, with its immense budget, could afford to make the required changes to WhatsApp. However, with an overly deferential adherence to its fiduciary responsibility to shareholders, Facebook is unlikely to make the necessary changes (and this is certainly not the worst case in which Facebook has ignored the ethical responsibilities and outcomes of the systems it owns).
Which leaves us to consider Signal, widely regarded as one of the more secure communication options available. Limited funding was a fairly plausible excuse in the past for Open Whisper Systems (the creators of Signal). However, that is no longer the case, given the $50 million financial infusion the organization received last year. Open Whisper Systems clearly takes the writing of secure code seriously, and it does appear to consider maintaining security part of its ethical responsibility. Hopefully, an incident like this is a wake-up call. Adding the ability to use a pseudonym instead of a phone number would be a truly excellent outcome from this unfortunate incident in Hong Kong. I am tentatively hopeful.
I am also hopeful for another takeaway from this incident: that a new respect emerges within the digital security field for the unsexy but critical work that often needs to be done to make things truly secure for some of the most at-risk people. Some of the biggest enhancements that can be made to digital security tools do not require the latest and greatest technologies. Some of the services most needed by high-risk defenders already exist in some form, but they require modifications to remove or overcome security barriers. And more often than not, immense feats of engineering are not required for truly substantial security gains.
What is needed is an engineering process that is human-centered. That process requires funders of human rights technology to ensure that on-the-ground human rights organizations, not engineering shops, are in charge of product roadmaps. It requires a commitment to boring housekeeping, such as running and maintaining servers (even if that is not fun) and sustaining the baseline safety requirements that defenders have. It requires engineers to learn to listen. And most important of all, it requires engineers to respond to security flaws when they are shown to exist in their systems.
Clarification 10/7: A previous version of this piece suggested that the police may have access to the phone number of 20,000 individuals in a Telegram group. However, Telegram confirms that only users who have chosen to share their phone numbers in the app, or those already listed in a phone’s contact list, would be vulnerable to police obtaining their number.