The Black Bear Post

What's New

One of the first blockchain technologies seeing real-life use

A new blockchain company called Coinfirm has announced a partnership with one of Poland’s major banks – PKO BP. Coinfirm has developed a blockchain-based verification tool called Trudatum. The idea behind it is to exploit blockchain’s permanent, immutable record of data. This news is quite big: blockchain was estimated to have a wide variety of applications, but outside of cryptocurrencies it is only now starting to see real-world use.

Each transaction will be represented by a permanent abbreviation, or hash, signed with the bank’s private key. Every client can then verify whether documents received from a partner or the bank itself are genuine, or whether a third party has attempted any alterations. Coinfirm was founded by Pawel Kuskowski, Pawel Aleksander and Maciej Ziolkowski, who already had experience with cryptocurrency.
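
Coinfirm has not published Trudatum’s internals, but the hash-and-sign pattern described above can be sketched in a few lines of Python. For simplicity this toy uses an HMAC as a stand-in for a real asymmetric signature made with the bank’s private key; all names and keys are invented:

```python
import hashlib
import hmac

def issue_record(document: bytes, bank_key: bytes) -> dict:
    """Bank side: hash the document and sign the hash."""
    digest = hashlib.sha256(document).hexdigest()
    signature = hmac.new(bank_key, digest.encode(), hashlib.sha256).hexdigest()
    return {"hash": digest, "signature": signature}

def verify_record(document: bytes, record: dict, bank_key: bytes) -> bool:
    """Client side: re-hash the received document, check it against the record."""
    digest = hashlib.sha256(document).hexdigest()
    expected = hmac.new(bank_key, digest.encode(), hashlib.sha256).hexdigest()
    return digest == record["hash"] and hmac.compare_digest(expected, record["signature"])

key = b"bank-private-key"  # stand-in for the bank's real signing key
record = issue_record(b"loan agreement v1", key)

print(verify_record(b"loan agreement v1", record, key))  # unaltered document -> True
print(verify_record(b"loan agreement v2", record, key))  # tampered document  -> False
```

Any alteration to the document changes its hash, so the stored record immediately exposes tampering – which is exactly why an immutable ledger of such records is useful.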

In Bulgaria, for example, banks are required by law to keep a paper trail of every single transaction made in the last several years. Trudatum is a digital solution that creates “durable media” which can be stored permanently. It is not far-fetched to expect banks to turn to blockchain very soon – mostly because it will save them effort and additional costs in the long run. For now, PKO BP is one of the first European banks to officially use blockchain technology for document administration. PKO BP’s top management is quite happy with the implemented technology, and banking tests were “highly satisfying”, meaning the shift towards blockchain is imminent.


Experimenting with supercomputers in outer space

On the 14th of August this year, Hewlett Packard Enterprise sent a supercomputer to the International Space Station (ISS) aboard a SpaceX resupply mission. The supercomputer is part of an experiment to see how such a sophisticated machine behaves in outer space for a year without any modifications – roughly the time it might take us to reach Mars.

Dr. Frank Fernandez, Hewlett Packard’s head payload engineer, explained that computers deployed at the ISS are always modified. Unfortunately, computers fit to work in space go through extensive hardening and are usually several generations behind those we use today. This means complex tasks are still done on Earth and the results transmitted back to the ISS. This model works for now, but the longer the distance, the more time it takes transmissions from Earth to reach their destination.

Taking into account the complexity of a trip to and a landing on Mars, it is crucial to ensure immediate, advanced computational power – not hardware that is five or even two years old. The idea of the experiment is to see whether there will be deviations in processor speed and memory refresh rate, and to make adjustments wherever necessary for optimal results.

Dr. Fernandez said that error detection and correction can be done through built-in hardware when given enough time. When exploring space, however, time is sometimes a luxury. The main idea is to have the computing tools on board the spacecraft. The more time is saved on computations, the more experiments can take place in outer space. Another benefit is lower bandwidth usage on the link between Earth and the ISS, since much of it is currently used to transmit computational data. If the processing power is in the station itself, this bandwidth can be used for other purposes.

In the end, the goal is to find the optimal technology we currently have to aid our efforts in space exploration.



How hate speech threatens freedom of expression, and the role of social media

Nowadays almost everyone is on social networks. With so many people creating and sharing content, it is no surprise that a small portion of it consists of hate speech, extremism and sexual content. On Twitter there is still targeted harassment and trolling; Facebook distances itself from fake news but also does nothing to prevent it – a rather twisted understanding of freedom of speech. Politicians are now moving to change all of this.

First in this endeavor was Germany, whose government proposed a €50m fine for social media companies that fail to delete abuse, slander, fake news or hate speech within 24 hours of its initial publishing. Interestingly, Europe is taking the driving seat in regulating social media – an industry where all the big players are American.

While in the US freedom of speech is indefeasible, European law recognizes hate speech due to historical circumstances. German justice minister Heiko Maas recently stated that too little inappropriate content on social media is deleted, and even when this happens, it is done rather slowly. Reporting practices are slow and ineffective. Germany is pressuring social media companies to battle fake news more rigorously. Strict sanctions would compel these organizations to delete plainly criminal content across the whole platform within 24 hours of its initial publishing. Other inappropriate content is to be removed within a week. Social networks are to publish reports on the complaints received and how their complaints teams handled them. When they do not meet the set requirements, hefty fines are to be expected. All disputes are to be settled in German courts in order to avoid differences between EU members’ laws.

Maas mentioned in a speech that obvious lies are also protected under freedom of speech, but “freedom of expression ends where criminal law begins”. His argument is that social media companies are not simply media for information exchange, but are responsible for what is conveyed through them.

According to various independent organizations, social networks delete between 40 and 80% of all illegal content in the first 24 hours, whereas the German government’s initial target will be 70%.

German authorities are planning EU-wide legislation targeting abusive and criminal content on social networks, and it will certainly be something to look forward to. The European Commission has also told Facebook, Google and Twitter to alter their terms and conditions to reflect EU consumer rules.

From a slightly different perspective, Germany might be worried about the effect fake news could have on its federal elections this year, given their correlation with the success of Trump’s campaign. There is also a large number of refugees in the Federal Republic who are increasingly the target of hate speech. A growing number of policymakers in the EU and US want to see social network companies block harmful content, which will likely turn into a global initiative. On the negative side, this would give governments control over social media content. What exactly constitutes hate speech is also yet to be defined in criminal law in the United States.

One thing is certain – social media companies CAN battle fake news, since they already manage to target us with marketing. The question is – will they? After all, it’s not nearly as profitable. Algorithms can distinguish hate speech, but such a solution would make social network platforms slower, which in turn would hinder user traffic and ultimately revenue. Perhaps such an automated solution can be implemented in the final stages of the product life cycle – maturity and decline – which Facebook and Twitter have not yet reached.

On the one hand, requiring social networks to self-manage inappropriate content will most certainly lead to a “delete first, ask later” policy in order to avoid serious fines. On the other hand, if Western governments oversee hate speech posts, won’t this itself limit freedom of speech?

Freedom of expression should have limitations if it leads to extremism. Developed Western societies only stand to lose from fake news, which seems to be a tool for populist leaders to gain favor with the public. We have a responsibility to educate and advise less experienced users to stop using fake information portals.



Enernet – the basis for smart cities

If you are thinking about a profitable and secure long-term investment, the energy sector might be what you are looking for. Nearly 30 years ago the hot topic was the internet; today it’s the enernet. Let’s first clarify what the enernet is: a dynamic, distributed, multi-party energy network built around clean energy production, storage and distribution, serving as a major foundation for smart cities.

In large, oligopolistic markets the only way to compete with the establishment is through innovation. In the 90s, small innovators fundamentally changed the way we access the internet – from analog to digital via existing cable networks.

Today, for example, SolarCity is trying to drastically reduce, and potentially eliminate, our dependence on nuclear power and fossil fuels. Smart equipment also reduces dependency on the existing power grid, which is costly to maintain and not prepared to meet the energy needs of tomorrow. The mind behind SolarCity is one of the leading visionaries of today, Elon Musk. His more famous companies, Tesla and SpaceX, share SolarCity’s aim of leading humanity towards a sustainable future. Recently SolarCity introduced solar roof tiles which, through a home system with storage and electric vehicle charging, can help houses (and cars) become energy-independent.

Innovation goes beyond Musk’s visionary companies – enernet discoveries are only beginning to emerge. Nanogrids, microgrids, distributed energy resources and virtual power plants are currently under development. These inventions will drive down production costs for many businesses. They will also make investing in an energy self-sufficient home even more feasible, as it will pay off the initial expense much faster.

The logical outcome of these discoveries is smart cities that are healthier and more secure. Development will be multidimensional, as energy security and grid stability will be able to withstand superstorms and cyberattacks.

There is, of course, doubt that it will all be sunshine and roses. Pessimists stick to the idea that it would take too much time and too many resources to implement these technologies in our daily lives. But look at the internet delivery system again – the transformation from analog to digital was remarkably fast and cheap. Economies of scale will keep accelerating this process until every home is riding “the green wave”.

Utility companies that own electric grids and distribution networks have a huge opportunity in front of them to stay relevant. Why shouldn’t they feel much pressure? Because although the market is open today, the initial investment is so huge that no small or mid-sized enterprise can even consider entering. If established businesses approach the situation intelligently, they will still make money – not nearly as much as in the current oligopolistic situation, but enough to be profitable.

Fossil-fuel companies also risk losing their enormous profits, but as they are among the planet’s leading polluters, we cannot trade our species’ existence for corporate profits.

Enernet innovation, like most innovation, is usually acquired rather than grown in-house. There is simply no other way to go, as it’s cleaner and cheaper.



How AI can help revolutionize education


AI is already an inseparable part of our lives – it’s in our cars, our homes and our phones. Major tech companies have their own AI implemented in a variety of products – whether it’s Siri, Alexa or Google Assistant. These programs are becoming increasingly proficient at answering our questions, so how does this translate into our educational system? They can help us find accurate information, so why are schools not jumping at this exciting new opportunity?

Our educational system is currently stuck in its well-tested “ancient” methodology, where by default teachers are the one and only true source of information. Even though minimized in terms of exposure, the personal beliefs of educators can still influence how information is conveyed. For example, I had a teacher who loved giving religious examples, even though at a very young age I held atheistic beliefs. But this is still a small problem compared to the job-related skills of tomorrow. The World Economic Forum estimated that 65% of today’s primary school students will end up in jobs that do not yet exist. It turns out that while knowledge will always be valuable, flexibility and the ability to retrain will be even more important. Finland has abolished the passive learning paradigm: students work in groups, learning problem solving under the guidance of a teacher. This is definitely a step in the right direction in terms of personal development and acquiring a skill set that will serve well in the future.

We still have not reached the singularity – the point where algorithms will be able to learn by themselves. Today AI can provide you with information within seconds, whereas 30 years ago people had to go to the library. More often than not, however, personal assistants either cannot find the right answer or provide a wrong one. One still does not have to go to the library, but rather uses search engines. Today we have enormous amounts of information and cheap computing power, so it is no wonder that an ever-increasing number of companies are investing in creating their own AI – algorithms that “learn how to learn” in order to navigate this vast sea of information.

AI is far from what people would like it to be right now. Machines still need human guidance and are mostly used to improve people’s productivity. Since the dawn of time we have used tools to improve our work. Now, however, is the only time in history when the tools are being developed at warp speed, and one way or another we will reach the singularity. So we have to tread carefully. Until then we have to train the only superintelligence we have – the human brain and its carriers, people. This means radically reevaluating our educational system. It is estimated that creativity will become the most important job skill, so let’s minimize the risk of creating more future unemployment.



Our progress in the quest for creating Artificial Intelligence

Machine learning has been mentioned a lot recently – one might even think something groundbreaking has just been discovered. In reality it has been around almost as long as computers, and, no, nothing incredible has been discovered lately. Long ago Alan Turing asked whether machines can think, and we have certainly come a long way in the pursuit of creating an artificial consciousness, but we are still not quite there. Such a discovery might help us crack the mystery of our own mind and perhaps the eternal question, “Why are we here?”. Philosophical implications aside, in this article we would like to shed some light on a few aspects of Artificial Intelligence.

If my data is enormous, can intelligence be created?

Initial attempts at creating AI consisted of letting machines full of information run and hoping for a positive outcome. Based on our limited knowledge of the universe and our place in it, this is not at all far-fetched. We are a result of entropy – given billions of years, living matter emerged out of inanimate matter. The concept is somewhat similar, with two big restrictions: time is not unlimited, and these machines have finite memory capacity (as opposed to a seemingly endless universe). Google might be the pinnacle of this endeavor, but our search engines won’t evolve a consciousness of their own.

In broader terms, machine learning consists of reasoning and generalizing based on initial sets of information, then applying the result to new data. Neural networks, deep learning and reinforcement learning all represent machine learning, as they create systems capable of analyzing new information.

Some 60 years ago, processing power was a fraction of what we have now, big data was nonexistent and algorithms were primitive. In this setting, advances in machine learning were nearly impossible, but people kept going. In recent decades, neurology has helped advance neural networks. Machine learning tasks can be broken down into classification and regression. Both methods work with previously provided data. The first categorizes information, while the second develops trends that then help make predictions about the future.
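
The regression side can be illustrated with a toy ordinary-least-squares line fit in plain Python (the monthly sales figures below are invented for illustration):

```python
# Fit y = a*x + b by ordinary least squares and extrapolate a trend.
def fit_line(xs, ys):
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
            sum((x - mean_x) ** 2 for x in xs)
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Invented monthly sales figures with a clear upward trend.
months = [1, 2, 3, 4, 5]
sales  = [10.0, 12.1, 13.9, 16.0, 18.1]

a, b = fit_line(months, sales)
forecast = a * 6 + b  # the "prognosis" for month 6
print(forecast)
```

Classification would instead assign each new data point to a category – which is exactly what the perceptron below does.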

Frank Rosenblatt’s perceptron is an example of a linear classifier – its predictions are based on a linear prediction function that splits data into parts. The perceptron takes objective features (length, weight, color etc.) and gives them values. It then works with those values until an accepted output is achieved – one fitting into predefined boundaries.
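
The perceptron’s mistake-driven update rule is simple enough to sketch in a few lines of Python. This is a toy illustration, not Rosenblatt’s exact formulation, and the two-feature data points are invented:

```python
# A minimal perceptron in the spirit of Rosenblatt's linear classifier.
def train_perceptron(samples, labels, epochs=20, lr=0.1):
    """Learn weights w and bias b so that sign(w.x + b) matches the labels."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, target in zip(samples, labels):
            activation = sum(wi * xi for wi, xi in zip(w, x)) + b
            predicted = 1 if activation >= 0 else -1
            if predicted != target:  # update weights only on mistakes
                w = [wi + lr * target * xi for wi, xi in zip(w, x)]
                b += lr * target
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b >= 0 else -1

# Two linearly separable clusters, labeled +1 and -1.
samples = [(2.0, 1.0), (3.0, 1.5), (0.5, 3.0), (1.0, 4.0)]
labels  = [1, 1, -1, -1]

w, b = train_perceptron(samples, labels)
print(predict(w, b, (2.5, 1.2)))  # a point near the +1 cluster -> 1
```

Because the clusters are linearly separable, the perceptron is guaranteed to converge on a separating line after finitely many updates.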

Even people working in this field find it confusing

Neural networks are many perceptrons working together, creating a structure similar to the neurons in our brains. In recent years scientists have tried to create AI by mimicking how our consciousness works – or at least as far as we know it.

Deep learning has been the next big thing in AI development. Deep networks are neural networks with more layers, adding more levels of abstraction. It is important to remember that a computer does not perceive the traits a human would between two or more objects – machines need abstraction in order to fulfill their task. This difference in perception is perhaps the final frontier to developing an AI capable of passing the Turing test.

Despite our solid progress, there is a long way to go. The black box of machine learning is an example of an issue that we still can’t quite figure out. We can say the exact same thing about the human mind. The good news is that scientists are working on both problems, and not knowing something has never stopped us from digging deeper and ultimately finding the answers we are looking for.

A more business-oriented view of algorithms

In this article we would like to discuss the business significance of artificial intelligence – or how complex analytics tools help data scientists make sense of large amounts of information, whether historical, transactional or machine-generated.

When used properly, these algorithms help detect previously undetectable patterns – for example, customer buying behavior, or similarities between seemingly unrelated cyber attacks.

One big problem with algorithms is their short life span. Besides great math, one also needs to be constantly aware of which algorithms are becoming obsolete and replace them with new, better ones. This needs to be done continuously and quickly, as in today’s complex business environment only the highest-yielding algorithms survive.

After a hacker attack, the defense system is updated to neutralize the threat. But hackers always come back with a different approach. In order to stay secure, companies have to develop algorithms at least as quickly as the cybercriminals do. On Wall Street even the best algorithms are profitable for six weeks at most; within that period competitors are able to reverse-engineer and exploit them. Algorithmic efficiency is what matters most in cybersecurity. One well-known case involves the retailer Target: its systems were able to detect the hack, but it had no algorithm for separating the real attack from unimportant errors, as the information spewed out was simply too voluminous.

In financial services the stakes are extremely high, so algorithmic fraud detection is also highly developed. Knight Capital, an American global financial services company, lost approximately $440 million in 45 minutes back in 2012. Needless to say, the company no longer exists, whereas algorithmic innovation only keeps growing.

It’s essential to come up with a way of easily compiling data while sieving out enormous amounts of irrelevant information. As systems reveal new patterns of failure, one can improve on what was already done with an innovative approach.

Another field showing rapid algorithmic development is predictive maintenance. The Internet of Things is expected to be part of our household items by 2020, but it is already seeing industrial use. Algorithms are being developed to identify the initial signs of system failure and alert the command center.
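
No specific vendor algorithm is cited here, but the general idea – flagging sensor readings that drift far from the recent baseline – can be sketched in Python. The window size, sigma threshold and temperature readings below are all invented for illustration:

```python
# A toy predictive-maintenance check: flag readings that deviate
# strongly from a rolling baseline of recent values.
from collections import deque
from statistics import mean, pstdev

def make_monitor(window=5, max_sigma=3.0):
    history = deque(maxlen=window)
    def check(reading):
        alert = False
        if len(history) == history.maxlen:
            mu, sigma = mean(history), pstdev(history)
            if sigma > 0 and abs(reading - mu) > max_sigma * sigma:
                alert = True  # early sign of failure: notify the command center
        history.append(reading)
        return alert
    return check

check = make_monitor()
readings = [20.1, 20.3, 19.9, 20.0, 20.2, 20.1, 35.7]  # last value is a spike
alerts = [r for r in readings if check(r)]
print(alerts)  # -> [35.7]
```

Real systems use far more sophisticated models, but the principle is the same: learn what “normal” looks like, then alert on deviations.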

There are a number of tools that help in the development of algorithms:

  • visual analytics – pattern recognition applied in real time to explore enormous amounts of historical data and build usable models
  • streaming analytics – inserting algorithms directly into streaming data in order to monitor it live and isolate patterns or detect potential threats
  • predictive analytics networks – a place where data scientists help each other and polish their algorithms with a degree of reciprocity
  • continuous streaming data marts – used to monitor an algorithm in real time with the possibility of calibrating it

Top companies are continuously improving on their algorithms. In an environment they created, only the fittest algorithms survive and they will drive smarter and sounder business choices.



Product systems and interconnectedness – the logical development of smart devices

As recently as 15 years ago we had slow connectivity and expensive storage. When thinking about the future of computing, two major scenarios seemed feasible – either storage would become so cheap that every device could hold gigantic amounts of data, or connectivity would become fast and widespread so that information could be stored remotely.

This question applies directly to one very new and attractive market – will self-driving cars make decisions on board, based on traffic conditions and optimal routes, or send data to the cloud and wait for feedback? The same question applies to more advanced robots, and to where the heavy lifting of complex Artificial Intelligence will happen.

Today we observe a trend that includes both big storage capabilities and fast connectivity, but with the scales tipping towards cloud-based solutions. Naturally, smartphones are becoming more advanced, but hardware is improving at a modest pace and most of the advancements are software-related.

But besides complex multi-system machines, today we also have incredibly simple tools like Google Home, which consists of microphones and speakers combined with a connection to the cloud – where all the work is done.

We should shift from looking at electronics in terms of software and hardware to seeing new systems of devices, operating via programs and, most importantly, partnerships. To clarify this statement, we will examine the way mass technology changed across four main stages – analog, digital, smart development and personal digital systems.


Analog

Before digital media we had devices capable of doing only one thing – a TV, a Walkman or, later, a Discman. Almost every year there was one new amazing device improving on the experience provided by its predecessors. One interesting fact is that at the time, record stores had to adapt by selling several physical formats of the same album just to keep up with technological advancements while still reaching a wide customer base.


Digital

Digitalization really changed what was established in the previous period and made life much simpler. One mp3 player replaced the Walkman and the Discman, and later the smartphone virtually became a computer in your pocket. Yes, some “dinosaurs” expressed nostalgia, but they either disappeared or quickly embraced the benefits of one device having more than a single function.

Smart Development

With the smartphone becoming an inseparable part of everyday life, capable of serving as anything from a flashlight to a small TV, any other non-smart device needed improvement in order to justify its existence to the consumer. Chromecast enhancing TVs and the emergence of smartwatches are just examples of companies adjusting to a complex environment. It is only logical to focus on partnerships between these devices, building on their wide functionality.

Personal digital systems

Tesla’s cars and Amazon’s Echo are revolutionizing our view of consumer products. Thanks to machine learning, one can buy a product that gets better by itself over time. These are items whose software and hardware were developed together by companies that came to the conclusion that products are more appealing when they provide access to a system. This also adds to customer loyalty, as the system is part of the same corporate family.

Digital money will work only if retailers are ready to accept it. Smartwatches and smartphones are the kind of devices that should be embraced as personal access tools – to one’s online boarding pass, for example. An important question for the future is not what the device will be capable of doing, but how much of their personal information the owner will allow this digital “key” to hold.

One thing is certain – the future holds more interconnectedness than we could ever imagine, and it is up to us to make sure we build it in a way that cannot be exploited by third parties.




Some major challenges the IoT will bring

In our previous posts we mentioned predictions of around 20 billion connected IoT devices by 2020. Today this forecast may even be pessimistic, since Business Insider quotes numbers above 30 billion. This presents an enormous opportunity for better energy efficiency and data security. However, the new challenges that come with the rapid growth of the IoT have to be understood quickly, as they might drastically slow the implementation process.

In this article we will examine some of these challenges.

Device authentication

One very important aspect of IoT ecosystem security is identifying devices and thus preventing “outsiders” from entering the ecosystem. Today authentication is done on cloud-based servers, which proves a reliable choice when tens of devices are connected. However, pile up thousands or millions of IoT devices and authentication can become a liability. In terms of security we are not advanced enough: current practices require an Internet connection, which drains batteries and is totally useless when the connection drops. On top of that, people in general do not understand the issue of scaling, according to Ken Tola, the CEO of the IoT security startup Phantom. He says that working on a peer basis could easily handle large-scale deployments. This means moving functionality between IoT devices – whenever authentication is necessary, it can happen simultaneously between millions of devices without requiring an Internet connection.

The same startup is working on M2M (machine-to-machine) connection – a security layer able to identify two devices in a peer-based manner, establishing the levels and types of communication allowed between them.
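
Phantom has not published its protocol, so the following is only a generic illustration of how peer-based authentication can work without a cloud round-trip: a classic challenge–response over a pre-shared key, sketched in Python (device names and key provisioning are invented):

```python
import hashlib
import hmac
import os

class Device:
    """A toy IoT node holding a pre-shared key; no cloud server involved."""
    def __init__(self, name, shared_key):
        self.name = name
        self.key = shared_key

    def challenge(self):
        self.nonce = os.urandom(16)  # fresh random challenge for the peer
        return self.nonce

    def respond(self, nonce):
        return hmac.new(self.key, nonce, hashlib.sha256).digest()

    def verify(self, response):
        expected = hmac.new(self.key, self.nonce, hashlib.sha256).digest()
        return hmac.compare_digest(expected, response)

key = os.urandom(32)  # provisioned to both legitimate devices at setup time
sensor, gateway = Device("sensor", key), Device("gateway", key)

nonce = gateway.challenge()                     # gateway challenges the sensor
print(gateway.verify(sensor.respond(nonce)))    # True: the sensor holds the key

imposter = Device("imposter", os.urandom(32))
print(gateway.verify(imposter.respond(nonce)))  # False: wrong key, rejected
```

The point of the sketch is that both sides can prove possession of the key locally, with no connection to an authentication server – which is what makes a peer-based scheme attractive at IoT scale.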

Wireless communications

The IoT is the logical expansion of the Internet from our computers to our appliances, digitizing some of our everyday activities via wireless connectivity. The majority of IoT devices depend on radio frequencies (RF) such as Bluetooth and Wi-Fi. However, RF-based devices shut each other down due to interference, and this problem will only grow with the addition of more IoT appliances. One current solution is the additional 5 GHz band for Wi-Fi, but as the projected number of devices keeps growing, interference will persist. Another issue relates to power consumption, as IoT devices run on batteries.

An alternative is to substitute RF with Near Field Magnetic Induction (NFMI) for data transfer. NFMI’s signal decays much faster, so much of the interference is gone. NFMI creates a wireless “bubble” in which IoT devices connect and outside signals are ignored. Security protocols are active within the bubble, drastically reducing threats, while the fast signal decay allows the same frequency to be reused by a different device nearby. On a side note, NFMI has been used in hearing aids and pacemakers for more than a decade, but it might be the key to revolutionizing the IoT.

Traffic administration

Managing IoT devices can quickly become impossible if it is not considered at the earliest stages of implementation. Smart homes are one thing, but we will live in smart cities where parking machines and traffic sensors will also transmit data. Administration and integration should be as simple as possible. The biggest potential issue is many devices transmitting data at the same time but, as mentioned above, NFMI might be the key.

Startups are using machine learning for managing the complex automated networks that will consist of thousands of IoT devices. The algorithms will provide real-time distribution system control and self-management for big networks geographically spread over vast distances.

Most of the technologies we use will further develop to a level supporting the needs of an interconnected world. Those that are left behind will be replaced by ones ready to take on the challenges and opportunities the IoT will bring.



YipiY voting widget on VROUW

VROUW, a popular online lifestyle magazine for women in the Netherlands, has enriched its content with the YipiY voting widget.

We’ve restyled and implemented the tool, which is yet another variation of the Nieuwsbite platform. YipiY, the company behind Nieuwsbite, offers the service as a white-label solution to online publishers to increase engagement with their audience as well as to build valuable analytics based on readers’ behavior.

About Vrouw
VROUW is part of Telegraaf Media Groep N.V. (TMG), one of the largest news media companies in the Netherlands, with strong brands including De Telegraaf, DFT, Telesport, Metro, Autovisie, Privé, VROUW, Sky Radio, Radio Veronica and Classic FM.