
Global Crises, Local Impacts: Threats to Social Cohesion and How Cities Can Respond (July 2024)

— 18 minutes reading time

On 24 July 2024, the Strong Cities Network hosted the eighth in a series of monthly webinars on Global Crises, Local Impacts: Threats to Social Cohesion and How Cities Can Respond. Successive global crises – from COVID-19 and migration to the war in Ukraine, climate change and, most immediately, the Israel-Gaza crisis – have had impacts on social cohesion in cities around the world, including across Europe and North America. Convened under the auspices of the Strong Cities Transatlantic Dialogue Initiative, this session focused on the ways cities are impacted by online harms – including disinformation, extremism and targeted attacks – and what local actors can do to respond and protect their cities.

The webinar featured a briefing by Katherine Keneally, Director of Threat Analysis and Prevention, Institute for Strategic Dialogue (ISD), and Kelsey Bjornsgaard, Director of Practice, Strong Cities Network, on the threats facing cities from online harms and an overview of impacts and approaches in Canada, including Bill C-63, by Yusuf Siraj, co-founder of Foundation for a Path Forward (F4PF). Dr Ade Chinedu Adesina, Anti-Racism Specialist for the City of Ottawa, and Lisa de Haan, Senior Policy Manager for the City of Amsterdam, expanded on how online threats affect their cities and the approaches they are taking to address them.

Eric Rosand, Executive Director, Strong Cities Network, briefed participants on the origins of Strong Cities’ Global Crises, Local Impacts Initiative and how the monthly webinar series was launched in response to requests from cities to provide support and peer learning as they try to navigate the local impacts of global crises. Many cities had expressed concern about the ways these crises were playing out online, and the consequences online threats were having on local leaders and the residents they represented.  

Key Takeaways

  1. Online harms carry serious offline consequences in cities. Online hate speech, extremism and disinformation have been linked with offline manifestations of hate and violence across a diversity of cities in Europe and North America. The online and offline spheres form a single ecosystem in which ideas and incidents in one inspire and give shape to the other. 
  2. There is a gendered element to online hate and abuse, and women of colour are most at risk. Online threats, harassment, trolling and cyberbullying disproportionately target and impact minority communities, with women of colour consistently at greatest risk. This trend can be seen across Europe and North America in the targeting of local and other female politicians, with serious consequences for female representation and political influence, as it shapes many women's decisions to run for office or pursue contentious issues. 
  3. Cities benefit from a comprehensive approach to addressing online harms. Whether the strategy is developed nationally or locally, it needs to provide a cohesive response that mobilises a wide range of actors, including local governments and elected local leaders, to pursue and prevent a myriad of threats. 
  4. Advancing technology offers cities numerous opportunities and poses a range of challenges. In such an environment, scaling existing digital literacy and digital citizenship training for different actors – including youth and older residents – will be crucial to maintaining digital resilience. 
  5. Strong Cities and ISD offer a range of training and support that can strengthen a city's capacity to identify and address online harms. This includes programmes in digital literacy and citizenship for residents of all ages and training for local officials on strategic communications, threat identification and analysis, and digital technologies. ISD also conducts social media monitoring to identify online harms and can provide bespoke briefings, support to develop flagging, reporting and support systems for local officials and other constituencies targeted online by hate and abuse, and information and cyber security systems fit for city purposes. 

Key Themes

Disinformation damages trust in governments, including at a local level, and disenfranchises voters; threats and harassment are curtailing candidates’ willingness to stand for (re)election and pursue contentious issues; and online vitriol is driving polarisation. Presenters and the panel discussed how a hostile online environment is impacting local politics and threatening democracy in Europe and North America.

Disinformation is false or misleading information or behaviour that is spread online or offline with the intent to deceive for economic, military or political gain, and which may cause public harm. When false or misleading information is spread unintentionally, it is misinformation.

Disinformation and misinformation have become a persistent threat to elections at all levels and are frequently wielded to undermine a government's ability to govern. During elections, mis/disinformation, including conspiracy narratives, can obscure voters' understanding of key issues, undermine their faith in key institutions and impair their ability to vote. Katherine Keneally shared findings from ISD's research on election mis/disinformation, noting that it is not only widespread, but widely consumed and believed. For example, in the 2020 US election cycle, mis/disinformation about the election received six times more clicks on Facebook than factual news. In Canada, Yusuf Siraj shared that 56% of Internet users encounter “suspicious news” multiple times a month and 15% are more likely to believe it than mainstream media.

Mis/disinformation impacts all voters, but it does not necessarily affect them all equally. Reflecting on the impacts of mis/disinformation in Ottawa, Dr Ade Adesina noted that it is often disenfranchising, particularly for marginalised communities, who are disproportionately targeted and affected. In addition to affecting their decision-making and ability to vote or participate, it also hinders the local government's ability to serve these communities. For example, when disinformation misrepresents the city's activities or intentions, it undermines trust and wastes city resources by undercutting the effectiveness of its efforts. This depresses engagement with marginalised groups and can inhibit their full civic participation.

Mis- and disinformation are often racialized and disproportionately affect minority groups, denying communities equitable participation.

Dr Ade Adesina, Anti-Racism Specialist, City of Ottawa

Another form of online harm impacting democracy comes from the threats and harassment targeting candidates, election workers and voters. In Canada, Yusuf Siraj explained, harassment and intimidation are driving high numbers of municipal elected officials to resign or forgo a bid for re-election. Lisa de Haan noted that 75% of Council Members in Amsterdam receive threats or hate speech online and Mayor Halsema is frequently targeted, particularly for being a woman.

Katherine shared other instances where this is happening. For example, she pointed to a recent National League of Cities survey which found that 79% of the 112 US local officials surveyed reported regular harassment and threats on social media, while a Princeton study of 2,900 US local officials found that over half had been insulted, a third had been harassed and one in six had received threats. In Europe, a study commissioned by the European Parliament found that physical violence and online threats targeting local officials have become a widespread phenomenon across the European Union. Katherine also noted that, in the UK, more than 3,000 offensive tweets are sent to MPs every day, and online threats and abuse have become commonplace in England, Scotland and Wales, especially for female politicians. Meanwhile, surveys in Germany show that half of mayors have received hate online, leading to concerns about personal safety and impacting their willingness to continue their work.

Threats are communications that express the intent to harm another.

Harassment includes ongoing behaviour designed to intimidate an individual.

Speakers shared how these threats, harassment and abuse take a toll. As noted, they impact a politician's willingness to pursue certain issues or even to continue in their role or run for re-election. This abuse – and its consequences – is more extreme for women, and especially women of colour. Katherine Keneally highlighted findings from ISD's monitoring of the 2020 and 2022 US elections, noting that women candidates faced considerably more abuse than their male counterparts. In 2020, for example, Democratic female candidates received ten times the abuse of their male counterparts, and Republican female candidates received twice as much. This gendered abuse, it was noted, poses a serious barrier to equal representation as it aims to force women – and women from minority backgrounds in particular – out of politics.

Speakers shared how politicians and political candidates are not the only ones being targeted. Poll and other election workers in the US have become a common target for abuse and threats, leaving city and state governments struggling to fill key positions. Voters are also targeted, intimidated to either vote a certain way or abstain from voting altogether, and, as Lisa de Haan pointed out, writers, journalists and scientists are being targeted as well.

While the webinar was focused specifically on online harms, presenters and speakers emphasised that online hate and extremism do not occur in a vacuum and often manifest offline in consequential ways. Likewise, offline incidents shape online hate and extremism, creating an interconnected ecosystem rather than parallel spheres. Katherine Keneally spoke about the role that online spaces play in supporting extremist groups and ideologies through radicalisation, recruitment, intimidation and mobilisation. In Germany, for example, a surge in online hate and extremism has been connected with a spike in offline violence, including the killing of local politician Walter Lübcke (Christian Democrats) by a right-wing extremist after persistent online hate against Lübcke's support for refugees.

How we train professionals and how we incorporate the whole online dimension in our training material… is that there’s no such thing as online and offline. It’s just one big blur.

Lisa de Haan, Senior Policy Manager, City of Amsterdam

Dr Ade Adesina described a similar situation in Ottawa, where he sees a link between racially motivated abuse and hate online and increased violence and discrimination offline. Because of this, he cautioned, it is important to identify times when racially marginalised groups may be at the greatest risk, such as during cultural days of significance or following a serious incident globally. During these times, Dr Ade's office is more vigilant online and may, for example, turn off comment sections on municipal social media accounts to reduce the chance of trolls taking control of the discussion and raising tensions that could affect offline events.

Cyberbullying is repeated aggressive and threatening behaviour targeting a victim online.

Trolling includes posting inflammatory and off-topic comments online to provoke and cause distress.

These online harms can manifest in a myriad of real-world consequences for the city and for individuals. In addition to the violence described above, they drive polarisation and discrimination that can heighten tensions between groups and damage social cohesion. They also have serious personal consequences for individual well-being, both mental and physical. Katherine Keneally explained how cyberbullying and trolling can cause lasting harm, especially for vulnerable groups, including minorities, women and members of the LGBTQ+ community, who are often disproportionately targeted and impacted. Youth tend to be particularly vulnerable to psychological harm, exacerbated by content recommendation algorithms. She added that ISD analysis has demonstrated how YouTube's algorithm exposes young users to inappropriate and potentially harmful content. Young people's vulnerability is a particular concern for the City of Amsterdam, where, Lisa de Haan explained, the local government has prioritised young people in its outreach to address and mitigate online harms.

Dr Ade Adesina cautioned that online trolling and mis/disinformation can have economic consequences for individuals from marginalised groups. Through his work as an anti-racism specialist in Ottawa’s government, he has seen that when a group – or even an individual – is consistently portrayed in a negative light online, they can have a harder time finding meaningful employment. This results in further marginalising already marginalised groups.

Yusuf Siraj, co-founder of Foundation for a Path Forward (F4PF), described Canada's comprehensive new Online Harms Act, Bill C-63, which was introduced on 26 February 2024. He said that, as a country, Canada is 'extremely online', with an Internet penetration rate of 94% and an average daily use time of six hours. This exposes some 37 million people to online harms and, as Yusuf noted, Canadians are regularly exposed to hate speech, harassment and disinformation. The Bill introduces a new legislative and regulatory framework, along with changes to the Criminal Code and the Canadian Human Rights Act, to provide new legal protections for users and penalties for those who abuse them. It also establishes a Digital Safety Commission as a centralised regulatory body, a Digital Safety Ombudsperson to advocate for the public and a Digital Safety Office of Canada to manage administration.

Lisa de Haan contrasted this with the approach in the Netherlands and her city of Amsterdam. She described a series of effective practices and methods for identifying problematic content and proactively protecting vulnerable users from a wide range of online harms. In this approach, 'online harms' has been included as a focus area within a range of specialised departments – for example, she looks at it through an extremism lens in her office, while a colleague in another office considers the implications for sexual trafficking. Each pursues their own specialised focus, and the different offices convene regularly: “we come together once every two months to discuss whatever is happening online, and to see whether we can learn from each other. It’s a small thing, but we do keep each other on board with what’s happening. I think these kind of small steps could be also the way to go, [operating] from your own expertise and your own skill set.” This method enables a specialised approach that maps online harms onto existing structures without a major legislative overhaul. But, as Lisa explained, it also has its limitations. Prevention and safeguarding in this area “is mostly focused on individuals”; it lacks the comprehensive “encompassing vision” of Canada's Bill C-63 that could bind all these efforts together, which can limit strategic planning and law enforcement responses where they are needed.

Speakers discussed how government moves slowly, but technology does not. On one hand, rapid advances in the digital sphere have provided a range of opportunities for cities to manage administration more efficiently and engage their residents more effectively. For example, Dr Ade Adesina described Ottawa’s online system through which residents can report hate and racially-motivated incidents. On the other hand, technology has also left cities vulnerable to new and rapidly evolving threats. As described above, extremists and other malevolent actors are exploiting the online sphere to radicalise, recruit, manipulate and mislead. Malign actors are increasingly targeting cities with different forms of cyberattacks.

Katherine Keneally described some of the ways these attacks have manifested, including Russian- and Iranian-backed 'hacktivist' groups targeting critical infrastructure to disrupt key services, and 'ransomware' attacks in which data is stolen, encrypted or both, at a high cost to the city. In both cases, she said, cities are frequent targets and among the most vulnerable, owing to outdated systems, insufficient security and, crucially, under-trained staff. She also explained some of the ways in which Artificial Intelligence (AI) has enhanced malicious content, including hate speech, extremist materials and disinformation. AI is enabling the development and distribution of fabricated content at scale, creating false impressions of networked support, circumventing moderation and facilitating greater personalisation.

Artificial Intelligence is a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations or decisions that can influence physical or virtual environments. (OECD 2023)

It is not all bad news, however. Speakers shared how, while AI has supercharged disinformation and extremist content, it has also boosted monitoring capabilities and responses. In addition to supporting greater engagement with residents through online spaces, it was noted that AI can be used to address online harms by supporting automated identification and moderation of extremist content, sentiment analysis and proactive interventions, real-time alerts of potential extremist activity and enhanced counter/alternative narrative campaigns. The key, it was emphasised, is expanding the knowledge and skills of government employees and other actors so they can identify and respond to the threats technology poses and maximise the opportunities it offers. This will require enhanced digital training.

Kelsey Bjornsgaard presented on two forms of digital training that are key to digital resilience among government stakeholders and the general public: one related to digital literacy and the other to digital citizenship.

Digital Literacy is the ability to obtain, apply and transfer knowledge using digital media across digital platforms.

Digital Citizenship is the ability to engage positively, critically and competently in the digital environment, drawing on the skills of effective communication and creation, to practise forms of social participation that are respectful of human rights and dignity through the responsible use of technology.

Kelsey shared some examples of ISD and Strong Cities programmes that train adults and youth in digital citizenship and digital literacy. Organised in partnership with German companies, the Business Council for Democracy (BC4D) teaches employees about the spread of hate speech, targeted disinformation and conspiracy theories and what they can do to counter online harms and help protect those around them.

To build young people's resilience to online harms, ISD's Be Internet Citizens programme – delivered in partnership with YouTube – works with teenagers aged 13+ to bolster their resilience to a range of online harms, including hate and disinformation, while empowering them to become well-informed and engaged citizens in the digital era. Strong Cities has focused on empowering young people to be leaders in promoting social change, including in meeting the challenge of online harms, through its Young Cities programme – a capacity-building programme for young activists and local governments to enhance youth leadership in addressing social challenges at a local level.

Similarly, through its Espace Egalité, the City of Strasbourg uses role-playing and simulations to broach the topic of hate and discrimination with children as young as six. The Espace serves as an education centre on discrimination, teaching visitors about the 20+ characteristics protected under French law, the impacts of discrimination and the steps victims and witnesses of discrimination can take to seek justice. It also humanises the experiences of migrants and refugees by taking children through the typical journey of an asylum seeker, while teaching them to think critically through games and puzzles that raise awareness about (unconscious) biases.

Lisa de Haan emphasised the importance of providing digital training for a range of city stakeholders to support a comprehensive approach to addressing online harms. She described the inclusive approach being taken in Amsterdam, in which professionals across the city are trained to recognise and respond to a range of online threats. This includes police, youth workers, case managers, school workers and more, ensuring that the people working offline are aware of the threats happening online and the impacts they can have. The training takes different forms, depending on the target group, and prepares workers to recognise not only individual pieces of content, but also broad narratives and different extremist ideologies. “We have incorporated the whole online dimension within our own awareness programs. And it’s like opening a door – it’s probably the first thing that you need to do. But it does take some time to get it right.”

This webinar was the eighth in a series of monthly webinars that bring together mayors, city representatives and research organisations for timely discussion and exchange of approaches around Global Crises, Local Impacts. The next session – Navigating Tensions between Migrants, Displaced Persons & Host Communities: City-Led Strategies for Addressing Flashpoints & Promoting Social Cohesion – is scheduled for 25 September 2024.


For more information on this event, the webinar series, or the Strong Cities Transatlantic Dialogue and Global Crises, Local Impacts Initiatives, please contact Allison Curtis, Deputy Executive Director, at [email protected].