Asia Trip 2013: Follow the adventures of a crisis mapper in Asia!

I owe my readers an apology: I have not disappeared or stopped blogging, I have just changed platforms for a little while. No worries, this is still my official blog, but for the time being you can see what I am up to here. I am using this other platform because it is simpler and easier: it allows me to post from my phone, and not necessarily original content. In other words: different methodology, different content, different technology.


I have started a long trip through Asia, visiting six countries in one month to work on social media, local technology communities, crisis mapping, the local context of communicating with communities, media and much more. For the trip, I have set up a Tumblr, and I am using it to write about my trip, my discoveries, and the interesting projects and people I am meeting.

I will be back blogging here once I return to a more normal life, but for the time being, please refer to my Asia Trip 2013 Tumblr for more information about what I am up to 🙂

And, as always, contact me if you have any tips, suggestions or comments 🙂

Crisis Mapping and Cybersecurity – Part III: security is knowledge

In the discussion we had at ICCM on crisis mapping security, we talked about the scenarios in which security issues arise for a crisis mapping project.

In my view, there are four:

  1. The case of a repressive regime, where the people managing the project are either activists or connected to activists
  2. The case of a repressive regime, where the people managing the project are not activists, or are so-called “improvised activists”
  3. The case of a humanitarian emergency where there are security concerns related to either the presence of militias or a repressive regime
  4. The case of a humanitarian emergency in general, where security is very much linked to the delivery of humanitarian aid and to the do no harm principle (which should indeed inform all the other cases as well).

Case 1 – repressive regime and activism

This was the example I talked about in my previous blog post. In this case the security issues that arise are not so much linked to the protection of the people managing the project, since they normally know the risks and are willing to take them. As long as we are sure they are informed about all the possibilities, it is ultimately their call to decide what to do and how. There is, however, a very important issue to be faced here: when the activists involve other people in the project, what knowledge about the possible risks is shared with those others? The example that can be made here is Tahrir Square: the Egyptians who organized the first demonstrations were activists, many of them with a history of arrests, torture and so on. But after a while a lot of “common citizens” joined the demonstrations: what was their knowledge of the risks? How informed was the decision they took?

All in all, I think there are two important things to keep in mind when approaching a case like this one:

  1. Activists normally make informed decisions and know the risks much better than we do. We have no right to decide for them whether something is worth it or not. I come from a family where my father spent 5 years in jail fighting a repressive regime; I would never dare to think that he did it because he did not know the risks. He did it because he knew them, and he decided to accept them.
  2. The crowd, if we want to call it that, may be getting into the process not knowing what the risks are. There is no way for us to prevent this apart from spreading knowledge about cybersecurity as widely as possible. And by spreading I mean producing documentation, using simple language, and having software companies and online networks educate and inform people about what it is they are using and what the vulnerabilities are. Information here is more important than food and water.

Case 2: repressive regime and “improvised activists”

I worked on a case like this some time ago, where the people involved wanted to do a crisis mapping deployment under a very repressive regime and had little or no knowledge of the environment they were acting in. Since we were providing support from abroad, we had to use our knowledge to inform them. All in all, the big lesson learned was that our knowledge of the situation was not enough, and the risk for them was too high. We came under incredible stress, they got very scared, and the deployment was closed. The risks everyone was running were really high, and we realized that there was no way for us to understand the situation better, since we were not there, nor for them to learn in such a short time frame without risking being killed, tortured or worse. In those cases my takeaway is: FIRST you get the knowledge, and only THEN you deploy. There is no such thing as learning as you go in those cases, because the risks are too high.

Case 3: repressive regime/militias and humanitarian emergency

This was the case of our deployments in Pakistan and Libya. This is a very complicated situation, since we are talking about several actors, with several degrees of risk associated with each one, and different possible outcomes depending on the actor, the beneficiary and the issue. I still think it is very difficult to draw lessons from these kinds of situations, since everything depends on the case. In addition, the issue here is very much linked to the concepts of open data and privacy, and to how you provide useful information to both humanitarians and affected communities while making sure that you do not endanger them and that you respect the do no harm principle.

Those types of deployments are the ones that will have to be evaluated extremely carefully: using local or trusted networks, doing a careful risk assessment for each actor involved, and making sure that links and connections with key actors are in place. My 2 cents on those types of deployments are the following:

  1. Treat different actors in different ways: not all information is sensitive or useful for everyone, so create different channels, protect them accordingly and deliver different information to different people (a minimal sketch of this follows the list)
  2. No information does not mean no risk. Not knowing can be as deadly as giving the wrong information to the wrong person, so let's not panic, but instead find ways to build information flows that allow vital information to get to the right people
  3. Do a very careful assessment of what information the people on the ground (be they humanitarians, the local population or the bad actors) already have or do not have, what their information channels are and how they use them. People rely on what they know to gather and get information out, and if you know their channels, you know their possibilities.
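
As a sketch of what point 1 could look like in practice: the actor names, channels and sensitivity levels below are assumptions of mine, not from any real deployment.

```python
# Route different information to different actors over different channels.
# All names and thresholds below are hypothetical illustrations.

CHANNELS = {
    "humanitarian_responders": {"channel": "encrypted email",   "max_sensitivity": 3},
    "local_monitors":          {"channel": "closed SMS list",   "max_sensitivity": 2},
    "affected_communities":    {"channel": "public SMS alerts", "max_sensitivity": 1},
}

def recipients_for(report: dict) -> list[str]:
    """Actors allowed to receive a report of a given sensitivity (1-3)."""
    return [actor for actor, cfg in CHANNELS.items()
            if report["sensitivity"] <= cfg["max_sensitivity"]]

# A sensitivity-2 report reaches responders and monitors, not the public feed.
print(recipients_for({"text": "road blocked near the camp", "sensitivity": 2}))
```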

Case 4: humanitarian emergency and the do no harm principle

In a recent working group in Geneva, a representative of the ICRC gave a very good presentation about the DO NO HARM principle and how we could apply it to crisis mapping. I think this is a great starting point (learn from those who have mastered it), and I have given it a lot of thought lately.

In the SBTF, for example, we have already designed our code of conduct on the basis of the ICRC code of conduct, but the issue here goes deeper, into the actual implementation of the framework when it comes to applying the do no harm principle. In this regard the SBTF has already started a discussion about how to do this better, and you will soon see some results of those discussions on our blog. The most important thing here is that the DO NO HARM principle is, and should always be, the main thing to keep in mind when doing a crisis mapping deployment, especially if communication with disaster-affected communities is involved.

On the other side, I am intrigued by how we can make sure we always act under this framework when, a lot of the time, we know that we do not know. The real risk is this: since we do not really know all the actual implications of crisis mapping deployments, this field still being young and developing, how do we balance the DNH principle against the urgency to do something, and against the actual benefit of a crisis mapping deployment? The more I think about it, the more it looks to me like a cat chasing its own tail: should we do nothing because the risks of harm are too high, or should we try, knowing that the more we try the more we risk, but also that the more we try the more we learn?

In those kinds of situations there are also the so-called secondary effects to take into account. While there are risks associated with publishing reports from people on the ground, for example, or with making certain information publicly available, there are also other risks associated with those factors that we do not take into consideration. One example: if the crisis mapping deployment is available online, a repressive regime may be tempted to block the Internet, in this way also endangering a lot of other activities and humanitarian operations that need the Internet to work effectively. Another example: if the crisis mapping deployment is collecting information via SMS or social networks, the groups in the population that do not have access to those means may be cut out of the system, and their problems or needs may be completely missed or underestimated because they are not able to express them through those channels. Secondary effects can be multiple and various, and it is extremely difficult to understand when and where they are taking place and what to do to avoid them.

In conclusion: I am sorry if readers did not find very good answers in this blog post. The intention was indeed not to give answers but to keep talking about the issue, hoping that a constructive debate can lead to some interesting discussions on real solutions. As a final point, I would like to highlight that there is no advantage in the endless battle between Muggles and Crowdsorcerers on the security issue if this battle is only framed in black and white.

The issue of security is there and always will be. Practice and constructive debate on the practical implementation of cybersecurity measures is, in my view, the only way to face it. We can't go back; we cannot prevent people from using crisis mapping under repressive regimes or in humanitarian crises. But we can inform them, we can share lessons learned and make our failures and our knowledge as open as possible. Free, open source knowledge about security is the best weapon we have to keep others, and ourselves, from making the same mistakes and endangering others in those situations. I am happy to do that, so if you want to do a crisis mapping deployment in one of those situations, feel free to shoot me an email. I may not have all the answers, but I will be happy to share what I have learned... for free. 🙂

Crisis Mapping and Cybersecurity – Part II: Risk Assessment

This blog expresses only my personal views, and not those of any organization or institution I have worked or currently work for.

I have a background in human rights and humanitarian affairs, and in those fields you do something that I realized is not that common in the ICT world (or maybe it is just underreported): a risk assessment. What does a risk assessment look like?

There are several components to the matrix: the risk, the source (sometimes), the likelihood, the mitigation tool/measure and (sometimes) the independent variables. I truly believe that this matrix can help in understanding which things we should focus our attention on and which things we cannot change or should just ignore. The key factor in the use of this matrix, though, does not lie in the matrix itself, but in who fills it in.

Here is an example of what a simple risk assessment matrix looks like.
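
A minimal sketch of that structure, with made-up entries, could look like this:

```python
# Illustrative risk assessment matrix. The entries are hypothetical
# examples, not the actual U-Shahid matrix discussed below.

risk_matrix = [
    {
        "risk": "SMS number blocked by the authorities",
        "source": "government / national security",
        "likelihood": 9,  # degree of likelihood (DL), on a 1-10 scale
        "mitigation": "register several backup numbers in advance",
        "independent_variables": "telecom regulation, election-day traffic",
    },
    {
        "risk": "website taken offline",
        "source": "government / national security",
        "likelihood": 9,
        "mitigation": "mirror sites and spare domains bought under several names",
        "independent_variables": "hosting location",
    },
]

# Sorting by likelihood shows at a glance where attention should go first.
for row in sorted(risk_matrix, key=lambda r: r["likelihood"], reverse=True):
    print(f'{row["likelihood"]:>2}  {row["risk"]}  ->  {row["mitigation"]}')
```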

This is the same matrix structure that we used for the U-Shahid project in Egypt, and while I was the one who proposed using it, I did not fill it in: the people who filled it in were the Egyptian activists, who had a very deep knowledge of all those factors thanks to their experience. If I had filled it in, the outcome would have been very different, since my ideas on the possible risks associated with the project were very different.

Here is an example of how we used this matrix.

  1. The first step was to identify the sources of risk: in the Egyptian case the source was very easy to identify, since there was essentially only one, the Egyptian government and its national security. We also identified a possible additional source of threat, the Muslim Brotherhood, but since they came to our trainings to learn how to use the system, we decided that they were not going to be a big threat: after all, they had been discriminated against several times and were not allowed to participate in the election, so we realized that they too had an interest in the project.
  2. The second step was to list all the possible risks, associate each one with a degree of likelihood (DL) from 1 to 10, and design mitigation measures. We came up with the following matrix:

A. Hardware:

  Get the computers where the FLSMS software was hosted: DL = 8

We set up what I call the FLSMS mobile system. Here “mobile” stands for “something that moves”, not for mobile phones. Basically, we realized that the most likely way for those computers to be caught by the national security was to be found while they were online, sending data to the Ushahidi system. For this reason we decided that the people managing the FLSMS system were NOT to do it from their homes, but from Internet points. And since an Internet point can be located over the course of 12 hours (election day), we decided that the team responsible for the system would move from Internet point to Internet point every hour to hour and a half. In addition, the messages were sent to the Ushahidi platform in one single batch each time the person managing the software moved to another Internet point.
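
Just to make the mechanics concrete, here is a minimal sketch of that routine, assuming a simple queue-and-flush design; the class and timings are my own illustration, not the actual U-Shahid code:

```python
# Hypothetical sketch of the "mobile" FLSMS routine: incoming SMS are queued
# offline and pushed to the platform in one batch only when the operator
# relocates to a new Internet point. Names and timings are illustrative.

import time

class MobileQueue:
    def __init__(self, relocation_interval_s: float = 75 * 60):
        self.queue = []                                      # SMS collected offline
        self.relocation_interval_s = relocation_interval_s   # ~1 to 1.5 hours
        self.last_flush = time.monotonic()

    def receive(self, sms: str) -> None:
        """Collect an incoming SMS; nothing goes online yet."""
        self.queue.append(sms)

    def due_to_move(self) -> bool:
        """True when it is time to pack up and change Internet point."""
        return time.monotonic() - self.last_flush >= self.relocation_interval_s

    def flush_at_new_location(self, upload) -> None:
        """Called once per relocation: one short online window, one batch."""
        upload(self.queue)
        self.queue = []
        self.last_flush = time.monotonic()
```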

The second problem was that the SIM card could have been identified in the field thanks to the IMEI number. For this reason the SIM cards for the system were bought by the organization and all registered under the same name (yes, that was a risk the organization was taking, but they decided it was better to have the risk on the organization than on individuals).

  Get to the server: DL = 5

The server was hosted abroad and accessed remotely. In addition, several copies of the database were made and distributed across different servers. The main server had an automatic backup done every hour, and the backup was encrypted.
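
As a rough illustration of hourly encrypted backups distributed to several servers, here is a sketch assuming Python's `cryptography` package; the paths, key handling and mirror list are hypothetical:

```python
# Sketch of an hourly encrypted backup distributed to several mirrors,
# assuming the `cryptography` package. Paths, key handling and the mirror
# list are placeholders; in practice the key would live off-server.

import time
from pathlib import Path
from cryptography.fernet import Fernet

KEY = Fernet.generate_key()                    # placeholder key management
MIRRORS = [Path("/mnt/mirror_a"), Path("/mnt/mirror_b")]

def backup_once(db_path: Path) -> None:
    token = Fernet(KEY).encrypt(db_path.read_bytes())   # encrypted dump
    name = f"backup_{int(time.time())}.enc"
    for mirror in MIRRORS:                              # several copies
        (mirror / name).write_bytes(token)

# Run hourly (a cron job would be the more usual choice):
# while True:
#     backup_once(Path("/var/data/platform.db"))
#     time.sleep(3600)
```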

B. System:

  Block the SMS number (in and out): DL = 9

This was one of the risks we could do least about. We had a public number that we had to advertise, since the project was based on a crowdsourcing methodology, and the number was registered, as was obligatory in the country. We decided to have 5 other numbers available and already working, registered as personal numbers of some of the less known people participating in the project (but swapped among them). Those numbers were divided like this: one was used by the monitors, one by the NGOs involved in the project, and one by the known network of people the organizers trusted, who were also reporting (and who shared it with their own trusted networks). The other 2 were backup numbers.
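
Here is some illustrative bookkeeping for that rotation; all numbers are fake placeholders and the swap logic is my own sketch:

```python
# Hypothetical tracking of the group-to-number allocation described above,
# with two spare numbers that take over when one is blocked.

NUMBERS = {
    "public":          "+20-000-0001",  # the advertised crowdsourcing number
    "monitors":        "+20-000-0002",
    "ngos":            "+20-000-0003",
    "trusted_network": "+20-000-0004",
}
BACKUPS = ["+20-000-0005", "+20-000-0006"]   # the 2 spare numbers

def replace_blocked(group: str) -> str:
    """Swap a blocked group's number for the next available backup."""
    if not BACKUPS:
        raise RuntimeError("no backup numbers left; register new SIMs")
    NUMBERS[group] = BACKUPS.pop(0)
    return NUMBERS[group]
```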

  Block the website: DL = 9

We created several mirror websites, and we bought, under several names, all the similar domains that could be used to replace the main one.

  Infiltrate the platform: DL = 9

The high likelihood of this variable is due to the fact that we knew the government could easily arrest the organizers and torture them to get the passwords to the system. For this reason we decided it was not worth trying to build some super-hardcore security system, also because this could have meant people being killed if they were not able to open the system for the regime's security people. So we decided that the main thing for us was not to prevent them from accessing the system, but to make sure that if they did, they could not destroy the information contained in it or get to the identity of the people working on it. What we did was to create a system where we could monitor what each registered person was doing inside the platform, and allow only the editorial board to delete information or change settings.
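
A minimal sketch of that access model, with hypothetical account names: every action is logged per account, and deletion is restricted to the editorial board.

```python
# Sketch of per-account auditing plus role-gated deletion. Account names
# and storage are placeholders, not the actual platform.

from datetime import datetime, timezone

AUDIT_LOG = []
EDITORIAL_BOARD = {"editor_1", "editor_2"}   # placeholder account ids

def log_action(user: str, action: str, target: str) -> None:
    AUDIT_LOG.append((datetime.now(timezone.utc).isoformat(), user, action, target))

def delete_report(user: str, report_id: str, reports: dict) -> None:
    log_action(user, "delete_attempt", report_id)   # even failed attempts are visible
    if user not in EDITORIAL_BOARD:
        raise PermissionError("only the editorial board can delete reports")
    reports.pop(report_id, None)
    log_action(user, "delete", report_id)
```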

The only 2 people who had access to the database containing all the details of the incoming SMS were one tech person inside the country and one tech person outside. All the backups were handled by the person outside the country, and the tech person inside the country had no access to them. This information, the fact that the tech person inside the country could not access the data, was shared broadly on channels that we knew were monitored by the national security.

  Falsify information: DL = 10

We realized that there was little we could do to prevent this. We decided to address the issue by relying on the fact that the numbers would play in our favor. In fact, several attempts to send in false information were made, and all were detected. In addition, we had a very strong verification system that checked reports one by one and only flagged as verified information that was supported by several independent known sources, or by multimedia that undoubtedly showed what was reported. We also encouraged people to use the SMS alert system of the Ushahidi platform, so that if something was reported in their area, they could go and verify it.
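
A sketch of such a verification rule, assuming a threshold of three independent known sources; the field names are illustrative, not the actual platform schema:

```python
# Flag a report as verified only if several independent known sources
# confirm it, or if conclusive media shows the event. Threshold assumed.

MIN_INDEPENDENT_SOURCES = 3

def is_verified(report: dict) -> bool:
    # distinct known sources reporting the same event
    known_ids = {s["id"] for s in report.get("sources", []) if s.get("known")}
    if len(known_ids) >= MIN_INDEPENDENT_SOURCES:
        return True
    # or media that undoubtedly shows what was reported
    return bool(report.get("conclusive_media"))
```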

C. People:

   Arrest all the participants: DL = 5

To try to avoid this possibility, we wanted to keep the identity of the people working on the project hidden. Unfortunately this was not possible, since the national security demanded the list of all the people working on the project. Since any measure to prevent them from arresting the participants was completely useless, we decided to do three things. The first was to make all participants well aware of the fact that their contact information was in the hands of the national security. The second was to ask all of them to move around as much as possible during election day, to avoid easy identification of their location. The third was to create an arrest protocol (see below).

   Arrest the activists managing the project (editorial board): DL = 9

This was the most likely thing to happen, as all the activists had already been arrested before and were all well known to the national security because of this project. To them we applied the same arrest protocol. In addition, we set up an external team, based in another country. In case all the participants were arrested, the entire system could be taken over and managed by this team of people, trained over the previous months, who were unreachable by the national security of the country. In this way, information could still come in from the country, but the processing was “outsourced” to a foreign team (key to this was the trust already existing between the two groups).

   Close the organization managing the project: DL = 5

For this eventuality we had already set up a chain of international organizations (human rights watchdogs) that could at least use their international weight to put pressure on the government in case the organization was closed. In addition, the organization kept constant contact with the national security and responded to all their inquiries about the project, including giving them all the information requested (sometimes written in such a way that it was impossible to understand what we were actually doing).

   Intimidate the participants in the project: DL = 10

This was something that was already happening during the design phase of the project. To prevent bad things from happening, we always made sure that the organizers, especially the less known ones, who were the most vulnerable, were never alone and always in busy areas when outside.

   Intimidate the people sending in information: DL = 7

This eventuality was again something we could not easily avoid. For this reason we made very clear, even in the advertising of the project, what the possibilities were: how the government could reach out to people and how it could trace them. In addition, we offered free training on how to use social media, mobile phones and the Internet securely, and on how to take videos and pictures with a phone without being caught.

In addition to this, we had an arrest protocol in place. The arrest protocol was designed by asking the people who had been arrested before to describe exactly how the arrests happened. The main thing for us was to let everyone else know if someone was arrested, for two reasons: to allow action to be taken immediately, like calling a lawyer, and to allow the rest of the team to take action to avoid being arrested, or to stop working on the project.

The phases that we identified for the arrests were:

  1. The police arrive.
  2. If the person to be arrested is at home, they possibly ring the bell or open the door directly.
  3. If the person to be arrested is outside, they simply take the person.
  4. In both cases, they ask for or take the person's mobile phone and computer.

On these premises, we realized that our chance to get the information out, especially if the person was arrested while alone, with no witnesses, was to allow them to send out an SMS. The way we did that was really simple: we asked everyone to set up a prewritten SMS in their phone, linked to the keyboard (something as simple as setting up a button on the phone that automatically brings you to the message already written). The time necessary to send the SMS out was as short as two clicks on the same button: one to get to the prewritten SMS, one to send it to 10 predefined numbers. As simple as that.
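
On the phones this was just a prewritten message bound to a shortcut key, but the logic can be sketched like this; the message text, the numbers and the send_sms stub are all placeholders:

```python
# Sketch of the "two clicks" arrest alert described above.

ALERT_TEXT = "ARRESTED"                                     # prewritten, short
ALERT_RECIPIENTS = [f"+20-111-{i:04d}" for i in range(10)]  # 10 predefined numbers

def send_sms(number: str, text: str) -> None:
    pass   # stands in for the phone's own SMS function

def panic_button() -> None:
    """One action: broadcast the prewritten message to the whole list."""
    for number in ALERT_RECIPIENTS:
        send_sms(number, ALERT_TEXT)
```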

This deployment was particular for several reasons. The first was that we knew we could not prevent the government from doing certain things, like arresting us or getting into the system, so instead of trying to prevent them from doing so, we tried to mitigate the effects.

The second was that the people involved were activists, people knowingly taking a certain amount of risk, and who were OK with that because they were willing to take risks to achieve their goals.

For those two reasons, our security protocols focused more on mitigating the EFFECTS than on preventing the acts from actually happening.

In addition, we knew very well that there was no way we could control or mitigate all the risks, so for those we decided to create systems where the act would at least become known to others, so as to allow other measures to be taken.

Crisis Mapping and Cybersecurity – Part I: Key points

This blog expresses only my personal views, and not those of any organization or institution I have worked or currently work for.

In his opening speech at ICCM, Patrick Meier listed a number of topics that have been, and will be, very important in the field of crisis mapping. One of them was security.

It is certain that security is, and will be in the years to come, one of the major topics to be addressed in this field, and yet I feel the need for a more pragmatic approach to it than the one used so far.

In one of the sessions at ICCM, which was entirely focused on this subject, some very interesting issues came out, which give me some more ground to explain why I think that almost 90% of the discussions on cybersecurity tend to go off on a tangent toward an academic-philosophical approach rather than practical solutions to practical problems.

1. One of the major discussions on security gravitates around the design of protocols, standards and codes of conduct that try to crystallize the problem into predefined codes and in this way find predefined solutions. While I understand the need for some sort of universal documentation that would allow us to look at the problem of security in a much clearer way, I think that starting with this as the first step is an upside-down approach that will not really help that much.

What I have learned from working with repressive regimes and dealing with security issues in crisis mapping projects is that everything is entirely related to the context of the place where you are implementing your project. In this regard I fear that, if we design protocols and procedures first and then try to “customize” them to the specific case, we will end up missing a lot of the local specificity that can make something that is very safe in one situation super dangerous in another.

2. A second discussion has been focusing on the tools, and on the responsibility of those who design, build and sell or make available those tools to the public. I have already discussed this in one of my previous blog posts, but I will reiterate here what I think is the main problem in focusing on the tools instead of the uses, and I will explain it with an example. A knife is a tool that we all have in every household. We all know a knife can be used to kill or hurt people, but we also know that this is not the only use you can make of it. Now, while a vendor selling us a knife will not tell us to be careful because we could also kill someone with it, since he assumes we know that, the situation is different when it comes to cyber/digital tools.

The main problem here is the level of collective knowledge that we have about the risks we run when using a certain platform. When we buy a phone, the vendor will not tell us that our phone can always be recognized and traced, that there is a unique identification number and signal associated with it, and that there are several ways someone can access all the information on our phone. The same happens when we open an email account.

While we know that certain information and tools are accessible or hackable, the knowledge about the risks associated with a lot of tools is still not that widespread. I truly believe that the conversation about who has the responsibility to spread this knowledge is useless: in my view, the responsibility is shared between those who make the tools and those who use them, and we should all work towards spreading more knowledge more broadly. Ultimately it is not about which tool is better than the other; it is about knowing exactly what the vulnerabilities associated with each tool are, and how to make this knowledge as accessible as possible.

3. The third big discussion is about what to do when the risks are too many, the knowledge too poor, or the solutions not yet designed. In this regard I am a big fan of the “if you don't know what you are doing, don't do anything” principle, but I also truly believe that we cannot treat inaction as the best solution for all the security issues we face when doing a crisis mapping project. If there are security concerns, they need to be addressed carefully and responsibly, but in urgent situations, like a crisis, there is no time for prolonged conversations about what to do. Action needs to be taken, and it had better be a good one. So, what to do?

My 2 cents on this are the following:

1. Stop talking about who should do what and focus on what needs to be done now. If you are interested in the topic and realize it is important, do it yourself. I am much less interested in the attribution of responsibilities than in the actual lowering of the negative outcomes. With this I am not saying that there are no responsibilities, but that I would prefer to act on the issue in advance rather than wait for the facts to happen and then call someone guilty.

2. Go local! I will never tire of saying that local populations normally have a much better knowledge of the risks and the dangers. Talk to them: they may not know how to use a tool, but they will be able to tell you how local actors will, or will not, take advantage of certain possibilities, if presented with them.

3. Focus on what you can do and mitigate, since if you cannot do anything about a risk, there is no point in wasting your time trying to find a solution to it. To do this, you should not focus just on the cause of certain threats, but on their consequences: you may not be able to make a government weaker, or less repressive, but you can look at the practical consequences of its repression and mitigate them.

4. Dissect your problems and your security concerns: when facing a security issue, break it down into all its different phases, components and possible outcomes, and look at each of them as if it were a single factor. You may not be able to solve the problem entirely, but you may be able to act on single components and in this way lower the overall impact of the security threats.

Now, I know this is easier said than done, and there is no “how-to” guide on this, but we have to start somewhere, no?

In my next post I will use a practical example to explain these suggestions.

Internet Governance Forum: real time open data vs security and privacy

I was invited by AccessNow to speak on a panel at the Internet Governance Forum, held on the 27th-30th of September, on privacy and security in an open/realtime/linked data world.

The goal of this workshop was to discuss open, realtime, and linked data generated, gathered, and organized online, which are proving vital to understanding local communities and the world we live in, and to ensuring that more informed decisions are made at all levels of society. While online data is proving immensely useful, the dramatically increasing trend of moving data online, whether knowingly, carelessly, or without consent, has led to unprecedented challenges to user privacy and security. At this juncture, Internet governance is needed to clarify and codify the rights and responsibilities of the various actors as regards online data.

The workshop featured short presentations from representatives of civil society, government, academia, and corporations, to facilitate discussion about these issues amongst the panelists, the audience, and international remote participants, including members of Access’ network (now in 184 countries).

Topics for discussion included:
• How open/realtime/linked online data can aid development
• The use of crowd-sourced, geolocation, and mobile data
• Existing and emerging privacy and security threats of and to online data, and ways to mitigate these risks
• How various stakeholders can assist the public in protecting their data and rights online
• Maintaining the balance between privacy, security, inclusivity, transparency, and accountability in legislation, regulation, and terms of service.

I was invited to speak as Innovation Media Advisor for the Africa Region at Internews Network, on the use of real time data and the risks associated with it. In my talk I decided to use as an example the project we are funding in Ghana, implemented by EPAWA (Enslavement Prevention Alliance for West Africa) in collaboration with Survivors Connect. Both organizations work on human trafficking: EPAWA is a four-year-old organization working in Ghana with civil society, governmental organizations and agencies, and the media, while Survivors Connect has worked on this in Nepal and Haiti before and acts as the technical implementer for the project.

The pilot project is in fact a sort of experiment in the use of mobile technology to support the creation of a network of local monitors, civil society groups and governmental agencies to track the movements of children and women from the rural areas to the capital, as well as cases of domestic violence within the communities themselves. The network will exchange real time information via mobile technology, with the support of a password protected Ushahidi platform.

I think this is a good example of the use of real time data, but it also highlights some of the main issues that I think can come up in other projects. This is the reason why I used this project as an example.

The following were my main talking points at the workshop.

TECH IS NOT THE SOLUTION TO EVERYTHING, ESPECIALLY TO SECURITY

My point here is that, when working with real time data related to sensitive issues, like for example human trafficking, the key factor in securing the data does not lie in technical security measures, be it encryption or other means, but in the social network. And I am not referring to online social networks, but to social networks of real people. I have noticed many times in my work that the safety of the information exchanged in any network relies heavily on the ability to create trusted networks on the ground that are able to secure information thanks to their deep knowledge of the risks, dangers and sources of potential security threats. Those social networks are the ones that still work when the technology is not there, and they are the true base of a secure system.

YOU CAN BUILD THE BEST TECHNICAL SYSTEM WHEN YOU START BY THINKING WHAT WILL HAPPEN WHEN TECHNOLOGY IS NOT THERE

Apart from the issue of security, what I think is extremely important, especially if you work in Africa, is to be able to design information systems that always have a PLAN B. If your system has no way to work without electricity, or without the Internet, or without a phone, then you are building something that is most likely extremely vulnerable and that can be blocked by something as simple as a storm. Technology is always supposed to make things easier and faster, but if technology is the only criterion for the functioning of your system, then it is a limit and not a facilitator.
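
A minimal sketch of what a PLAN B chain could look like, with three hypothetical channels that degrade from Internet to SMS to physical delivery:

```python
# Walk down a fallback chain until one channel accepts the report.
# The channel functions are illustrative stand-ins for a real deployment.

def send_via_internet(report: str) -> bool:
    return False    # pretend the connection (or the power) is down

def send_via_sms(report: str) -> bool:
    return True     # the mobile network still works

def store_for_courier(report: str) -> bool:
    return True     # last resort: keep it locally for physical delivery

CHANNELS = [send_via_internet, send_via_sms, store_for_courier]

def submit(report: str) -> str:
    """Try the richest channel first and degrade gracefully."""
    for channel in CHANNELS:
        if channel(report):
            return channel.__name__
    raise RuntimeError("no channel available at all")

print(submit("flooding reported in district 4"))   # -> send_via_sms
```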

EDUCATION ON SECURITY MEASURES, THREATS AND VULNERABILITIES IS KEY

Another interesting thing that I noticed when I was working in highly unsafe environments like Sudan and Egypt (under the Mubarak regime) is that a lot of people underestimate, or do not know at all, the risks and the vulnerabilities of their real time information systems. Even in those two cases, where I was working directly with activists who were well aware of the potential risk of someone hacking or tracing their information, the level of awareness of the actual vulnerabilities of their systems was very low. If we move to less specialized groups, and especially into the world of small NGOs, the ignorance of the issue is even bigger. In this regard I have to say that two factors are the underlying causes of this situation:

1) Language. Cybersecurity information is still written and explained in a way that is too complicated and technical for a normal audience. If a small NGO, which does not necessarily have a cybersecurity expert on its team, wants to find out how to protect its data, secure its servers and emails and so on, most of the time it gets stopped by the complication and difficulty of understanding a language it is not familiar with, and by instructions that require too much expertise to follow. (A very well done “Practical Guide to Protecting Your Identity and Security Online” edited by Access Now is available here.)

2) Awareness. Too often software companies do not explain in an open way what the vulnerabilities of their systems are, and too often technical equipment is sold without people having a real understanding of how it really works. We see this with mobile phones: the majority of “normal people”, meaning people who are not experts or part of the cybersecurity world, do not know that their mobile phone is always traceable, that their SIM card is traceable, or what an IP address is and what information it carries. The same can be said of people using software without fully understanding what its vulnerabilities are.

WHERE DOES PRIVACY END AND OPEN DATA START?

One of the main challenges I found when working with real time information systems is finding the boundary between open data on one side and privacy and security on the other. Let's take our Ghana project again. The system will exchange information related to children and women, to trafficking, abuses, and violence. For obvious reasons, a lot of the information exchanged cannot be public and needs to be handled in a very careful way. On the other side, if made available publicly, this information can be extremely useful and can lead to more preparedness and awareness of the problems faced by the communities on the ground, if not to a more prompt response to urgent issues. Of course, there are ways this information can be filtered and made available, but even then, the more you “open”, the more you increase the possible risks and vulnerabilities of your system. This tension is always there when dealing with open data and real time information systems, and it needs to be carefully dealt with on a case by case basis.
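
To make the filtering idea concrete, here is a hedged sketch of one possible approach: strip identifying fields and coarsen locations before anything is published. The field names and the rounding level are illustrative choices of mine, not the project's actual rules.

```python
# Redact a sensitive report down to a publishable subset.

def redact_for_publication(report: dict) -> dict:
    return {
        "category": report["category"],
        "date": report["date"],
        # round coordinates so single households are not identifiable
        "lat": round(report["lat"], 1),
        "lon": round(report["lon"], 1),
        # names, phone numbers and exact addresses never leave the
        # password protected system
    }

private = {"category": "trafficking_route", "date": "2011-10-02",
           "lat": 5.6037, "lon": -0.1870,
           "victim_name": "(withheld)", "phone": "(withheld)"}
print(redact_for_publication(private))
```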