Evolution In Machines – What Next?

Introduction

Humankind, as we know and experience it today, has evolved over millennia from our primitive biological ancestors to our current selves. This evolution has gone through many stages and phases that have led to our dominance and success on this planet.

The evolution and advancement of cognition and language played a significant role in establishing humans as the dominant species and had a profound effect on humankind's evolutionary journey.

Cognition, or the ability to learn and gain knowledge through a 'thought process', helped humans thrive more efficiently than other species. This resulted in the early discovery, invention, and development of societies, tools, agriculture, and other advancements. This cognitive ability enabled us to develop and communicate in a common language, which irreversibly boosted us towards becoming the dominant species on earth.

Language, a systematic approach to clear communication within a species, has been shaped over thousands of years into what we use now. Different communities and societies developed and shaped the different languages we have come to know today. Despite the differences, it is evident that language was crucial to the development of humans as a social species.

A combination of language and cognition enabled early humans to rally forces, build societies, understand obstacles, and explore and analyze their surroundings. The ability to instruct and impart knowledge played a crucial role in the development of civilizations and societies. This development can be broken down into three steps:

  • The ability to express one’s thoughts and ideas
  • The ability to understand the expressed thoughts and ideas
  • The ability to act based on those thoughts and ideas

These enabled early humans to gather in numbers and work efficiently and productively. Tasks that were near impossible for a single human or a small group could now be handled well, thanks to the massing of more humans. Many modes of communication were used to achieve the desired results: primitive signs, early languages, non-verbal pictorial representations (early writing systems), and so on. However, among these modes, the one that stood the test of time to emerge as the forerunner was language in the form of speech. It has clearly been at the helm of our evolution and steered us to our current position on earth.

This evolution has also interfered with and altered the evolutionary paths of everything that surrounds us or makes up our planet: flora, fauna, rivers, mountains, language, writing, science, human inventions, and more. Human inventions and discoveries, especially, have evolved at a pace similar to our own: tools, agriculture, cooking, trade, engines, automobiles, computers, space travel, and everything in between. At every stage of human evolution and endeavour there has been one standout invention or discovery that propelled us into the future, faster and further: stone tools to iron tools in the early stages; agriculture and cooking when we formed societies; weapons and trade when we built civilizations; engines and mechanics for our industrial revolutions; electronics and computer science for our modern age.

Among these discoveries and inventions, electronics and computer science have had a far-reaching effect on our population and have drastically impacted our day-to-day lives. Over the years they have become very personal to us and have permeated our lives and environments. From television sets, radios, and telephones to personal computers, mobile phones, and satellites, we are surrounded by electronics and computer science every day. They also have a uniqueness about them: we communicate with them, and to them, in ways we never did with earlier inventions.

Today we see machine learning and artificial intelligence enabling us to add cognition to machines and push them towards a cognitive revolution. We are enabling machines to learn from experience and make judgments on their own, making them more independent and more useful to us. We already have machines that can suggest movies we might like, drive cars, detect cancer early, and more; this is possible because of the cognition we have built into those systems using machine learning and artificial intelligence.

What makes these modern machines different from earlier machines is their ability to "think" and the way in which we are able to "communicate" with them. We do not use levers or knobs, reminiscent of early machinery; instead, we type out messages or instructions in a language familiar to us. This mode of communication has evolved over time: from punching to typing to clicking to voice.

Machines understanding human language through our speech is the next big step in the evolution of electronics and computer science. The combination of cognition and voice recognition in devices ensures that we can communicate, not just instruct, in the language we use and understand best.

Most early machinery and devices were designed and developed to make usage easier. Until recently, using advanced personal devices required us to be in physical contact with the device, know its basic operations, and understand its basic layout and structure. This made devices unreachable or unrelatable for many. The combination of cognition and voice recognition will now enable us to use devices with just our voice, making them accessible to many more people and breaking down the barriers many faced earlier.

The applications of such devices are immense. We believe that, like the events that helped humans leapfrog ahead in their evolution, cognition and voice recognition in machines will change the way we interact with devices and have a lasting impact on our lives.


Top Big Data Analytics trends in 2019

2018 brought to the fore a range of changes with regard to data. The significance of information within organizations was on the rise, and so were megatrends such as IoT, Big Data, and Machine Learning. Significant data initiatives such as cloud integration and governance reached a new high as well.

What big data has in store for 2019 is hence a point of interest.

The top trends are likely to be a continuation of what was witnessed in 2018. We can also look forward to new developments involving even more data sources and types. The need for integration and cost optimization will increase, and organizations will use even more advanced analytics and insights.

Let us take a look at the top trends in big data analytics in 2019.

1. Internet of Things (IoT)

IoT was a booming technology in 2018. It has significant implications for data, and a number of organizations are working to tap its potential. The volume of data generated by IoT will reach new highs, and organizations will likely continue to struggle to put that data to use with their existing data warehouses.

The growth of digital twins is likely to run into issues of a similar nature. Digital twins are digital replicas of people, places, or just about any kind of physical object. Some experts estimate that by the year 2020, the number of connected devices will exceed 20 billion. To extract value from the data they produce, it will be essential to integrate it into a modern data platform. This would have to be achieved with a solution for automated data integration that enables unification of unstructured sources, deduplication, and data cleaning.

2. Augmented Analytics

In 2018, a majority of qualitative insights went unexamined by data scientists, even after the analysis of large amounts of data.

But as the shift towards augmented analytics gains prominence, systems will use machine learning and AI to surface some insights in advance. Over time this will become an important trait of data analytics, data management, data preparation, and business process management. It may even give rise to interfaces in which users can query data using speech.

3. Use of Dark Data

Dark data is the information that organizations collect, store, or process in the course of their everyday business activities but are unable to use for any application. The data is collected largely for compliance purposes, and while it takes up a significant amount of storage, it is not monetized in any way to yield a competitive advantage for the firm.

In 2019, we are likely to see even more emphasis on dark data. This may include the digitization of analog records, such as old files and fossils in museums, and their integration into data warehouses.

4. Cost Optimization of the Cloud

Migrating a data warehouse to the cloud is less expensive than keeping it on-premise, but the cloud can still be optimized further. In 2019, cold data storage solutions such as Google Cloud Storage Nearline and Coldline will come into prominence. These can let organizations cut data storage expenses by as much as 50%.

5. Edge Computing

Edge computing refers to processing information close to the sensors, using proximity to best advantage. It reduces network traffic and keeps system performance optimal. In 2019, edge computing will come to the fore, and cloud computing will become more of a complementary model. Cloud services will go beyond centralised servers and become a part of on-premise servers as well. This augurs well for both cost optimization and server performance in organizations.

Some experts believe that, with its decentralized approach, edge computing and analytics could be a potential solution for data security as well. But an important caveat is that edge computing increases the number of potential access points for hackers. A majority of edge devices also lack IT security protocols, which makes an organization more vulnerable to hacking.

Advances in edge computing have also increased the need for a flexible data warehouse that can integrate all data types in order to run analytics.

6. Data Storytelling

In 2019, with more and more organizations moving their traditional data warehouses to the cloud, data visualization and storytelling are likely to advance to the next level. As a unified approach to data emerges, aided by cloud-based data integration platforms and tools, an even larger number of employees will be able to tell accurate and relevant stories based on the data.

As business integration tools improve and help organizations overcome data-isolation issues, data storytelling will become more reliable and better positioned to influence business outcomes.

7. DataOps

DataOps came across as a prominent trend in 2018 and is expected to gain even more importance in 2019. This is in direct proportion to the growing complexity of data pipelines, which calls for even more tools for data integration and governance.

DataOps is characterized by the application of Agile and DevOps methods across the data analytics lifecycle. This starts with collection, followed by preparation and analysis. Automated testing of the outcomes is the next step, and the outcomes are then delivered to enhance the quality of data and data analytics.

DataOps is preferred because it facilitates collaboration around data and brings about continuous improvement. With statistical process control, the data pipeline is monitored to ensure a consistent quality of data.

To leverage these trends to their optimum advantage, a vast number of organizations are coming to realize that traditional data warehouses call for improvement. With a larger number of endpoints and edge devices, the number of data types has increased as well. Use of a flexible data platform hence becomes imperative to efficiently integrate all data sources and types.


TypeScript and React – A Perfect Match

Today, as we extensively use social media like Facebook, Twitter, and others, our screens, pages, and feeds are constantly being updated with the latest news, shares, articles, and other updates. This is an essential element contributing to the success of any social media platform. If one were to stop and think about it, it seems very simple and rudimentary, but these updates are in fact highly expensive in terms of performance. These continuous live updates of the front end are driven by DOM operations, and handling them efficiently is crucial for the smooth performance of a page.

React

React, a JavaScript library for building UIs, comes as a welcome relief for this issue and is currently one of the most popular JavaScript libraries. React makes it painless to create interactive UIs. Component logic is written in JavaScript instead of templates, so we can easily pass rich data through the application and keep application state out of the DOM. The declarative style of React components also makes them easy to debug.

However, React components are written in JavaScript, and so they inherit the problems associated with JavaScript.

To tackle this tricky problem, React can be combined with TypeScript; the combination is efficient and can improve the maintainability of React projects considerably.
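As a minimal sketch of what a declarative React component written in TypeScript might look like (the Post and Feed names here are illustrative, not from any particular codebase):

```tsx
import React from "react";

// Illustrative types for a social feed entry.
interface Post {
  id: number;
  author: string;
  text: string;
}

// A declarative component: we describe what the UI should look like
// for the given props, and React works out the minimal DOM updates.
const Feed: React.FC<{ posts: Post[] }> = ({ posts }) => (
  <ul>
    {posts.map((post) => (
      <li key={post.id}>
        <strong>{post.author}</strong>: {post.text}
      </li>
    ))}
  </ul>
);

export default Feed;
```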

TypeScript

Every programmer who has ever written code knows the challenges and inadvertent delays caused while compiling or running code. It could be a missing integer, a misplaced letter, or improper use of casing. These tiny but critical errors on the programmer's part can lead to frustrating delays, which in turn can seriously affect the outcome of your solution. With JavaScript especially, the time taken to identify and solve a problem is longer because of the language's dynamically typed nature.

TypeScript lets you write JavaScript the way you actually think through a command or task. It is a typed superset of JavaScript that compiles to plain JavaScript. It is also purely object-oriented, with classes and interfaces, and it is statically typed like C# or Java.
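A small sketch of that static typing; the User and Greeter names are made up for illustration:

```ts
// An interface describes the shape of a value at compile time.
interface User {
  name: string;
  age: number;
}

// Classes and parameter properties compile down to plain JavaScript.
class Greeter {
  constructor(private user: User) {}

  greet(): string {
    return `Hello, ${this.user.name}!`;
  }
}

const greeter = new Greeter({ name: "Ada", age: 36 });
console.log(greeter.greet());      // "Hello, Ada!"
// new Greeter({ name: "Ada" });   // compile-time error: 'age' is missing
```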

Angular, another popular JavaScript framework, is written in TypeScript (from version 2.0 onwards). TypeScript helps JavaScript programmers write object-oriented programs and have them compiled to JavaScript, on both the server side and the client side.

Salient features make a real difference: type definitions make it easier to refactor variable names, a hard task in JavaScript, while IntelliSense (autocomplete and type-error detection) supports TypeScript and is an effective time-saver during development.

For example, TypeScript avoids unintentional errors like typos. JavaScript will accept any attribute name on an object, but TypeScript allows only the attributes available on the declared type.

In the code below, there is a typo: the programmer has typed 'recieve' instead of 'receive'.
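A minimal sketch of the idea; the Message type here is illustrative, not the article's original example:

```ts
interface Message {
  receive: () => void;
}

const message: Message = {
  receive: () => console.log("message received"),
};

message.receive();    // OK

// message.recieve(); // compile-time error: Property 'recieve' does not
//                    // exist on type 'Message'. Did you mean 'receive'?
```

In plain JavaScript, the misspelled call would only fail at runtime, with "message.recieve is not a function".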

Advantages of TypeScript:

TypeScript will provide compile-time errors for the most common problems in a React project, such as the following (a sketch follows the list):

  • A required property for a React component is not supplied by the parent
  • A property is supplied with a different type than the component expects
  • An extra property is supplied to the React component by the parent (this removes the need for the prop-types library commonly used in React projects)
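A minimal sketch of how these checks surface; the Greeting component and its props are illustrative:

```tsx
import React from "react";

interface GreetingProps {
  name: string;      // required
  excited?: boolean; // optional
}

const Greeting: React.FC<GreetingProps> = ({ name, excited }) => (
  <h1>
    Hello, {name}
    {excited ? "!" : "."}
  </h1>
);

// <Greeting />                         // error: required prop 'name' missing
// <Greeting name={42} />               // error: type 'number' is not 'string'
// <Greeting name="Ada" mood="happy" /> // error: 'mood' is not a known prop

export const ok = <Greeting name="Ada" excited />; // compiles cleanly
```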

If we use Visual Studio Code (VS Code) for the React-TypeScript combination, the problems mentioned above are shown as inline errors, which further reduces the time taken to figure out mistakes.

(Screenshot from VS Code: inline errors shown due to a type mismatch.)

  • TypeScript's autocomplete features are more advanced than JavaScript's.
  • The state of a React component can be defined as a TypeScript interface. This avoids problems due to null values in component state: TypeScript will throw a compile-time error if the default state values are not given at initialization (see the sketch below).
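A brief sketch, using an illustrative Counter component:

```tsx
import React from "react";

// The component's state shape, declared as an interface.
interface CounterState {
  count: number;
}

class Counter extends React.Component<{}, CounterState> {
  // Omitting 'count' here would be a compile-time error,
  // so the state can never start out null or undefined.
  state: CounterState = { count: 0 };

  render() {
    return (
      <button onClick={() => this.setState({ count: this.state.count + 1 })}>
        Clicked {this.state.count} times
      </button>
    );
  }
}

export default Counter;
```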

Drawbacks of TypeScript:

Even though there are many advantages, TypeScript also has some drawbacks when we start using it on a large scale. Without type declarations for the exported attributes and methods of a third-party library, we do not get the full benefit of TypeScript. So if a library does not have type definitions, we need to write them ourselves or look for alternative libraries that provide them. From our experience in web projects, most type definitions are available as node modules, thanks to the contributors of the open source community. If the project is a React Native project, things get more complicated due to the limited availability of type definitions.


How Can Oracle’s 2019 Java SE Licensing Affect You?

In 2018, Oracle, a leading American multinational computer technology corporation, released a new pricing model for commercial use of Java Platform, Standard Edition (Java SE). The company announced that from January 2019, commercial Java users would need to buy a license in order to receive updates. The news prompted many businesses to take a closer look at their Java usage and to plot action plans for JDK migration in 2019.

In this article, we'll analyze the Java SE licensing update in detail and consider the necessary factors as the changes are implemented: the parties that will be affected, the actions commercial Java SE users can take to stay compliant and keep receiving critical updates, and the changes in general.

What Are the Changes to the Commercial Java SE Model?

Users have previously known three Java SE products: Java SE Advanced Desktop, Java SE Advanced, and Java SE Suite. Before the changes, these three models required users to buy upfront licenses plus annual support. As of January 2019, those models were replaced by two new, subscription-based models: Java SE Desktop Subscription and Java SE Subscription.

The important changes include:

  • New Java SE Subscription Pricing
  • New Java SE Subscription Licensing Structure
  • Changes to Public Updates

Which Parties Will Be Affected by the Changes?

Not only legacy Oracle customers but all commercial Java SE users are expected to be greatly impacted by this change. The good news is that customers who use the old Java SE models will not be forced to shift to the subscription model. Although the two subscription models are the only options available to new customers in 2019 and perhaps in the coming years, existing customers do not necessarily have to switch to them. However, there may be a number of reasons to consider a switch. Considering this, it's important for commercial Java SE users to be aware of the differences in licensing and pricing.

If you're using Java SE non-commercially under a restricted scenario, you may have the right to use it without paying any fee. However, activating Java's 'commercial features' requires a license. For this reason, it's advisable to check that you are not using commercial features and that you are abiding by Oracle's Java licensing policies.

What are the Details of the New Java SE Licensing Structure?

With the new model, you no longer purchase a license upfront and pay an annual fee. Instead, you pay a monthly subscription, under terms of one to three years, for desktop or server licensing and support. Failure to renew the subscription after the given period results in losing the rights to any commercial software downloaded throughout the subscription period, as well as access to Java SE updates and Oracle Support.

How Are Java SE Licensing Requirements Calculated?

In the new Java SE subscription models, customers choose between desktop and server deployments. Desktop deployments use a Named User Plus (NUP) metric, while server deployments use a processor-based metric to calculate the Java SE license requirements.

These metrics have the same definitions as for standard Oracle technology products; however, there are no NUP minimums here. In most organizations, the number of desktop computers and laptops will effectively determine the NUP license count. A worked illustration follows.
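The sketch below illustrates the arithmetic only; the machine counts are hypothetical, and the core factor should be confirmed against your own Oracle agreement and the current core factor table:

```ts
// Server side: Oracle's processor metric counts physical cores
// multiplied by a chip-specific core factor (commonly 0.5 for x86).
const physicalCores = 2 * 8;  // e.g. two 8-core x86 CPUs (hypothetical)
const coreFactor = 0.5;       // assumed x86 core factor
const processorLicenses = Math.ceil(physicalCores * coreFactor); // 8

// Desktop side: NUP counts the users/devices running Java SE,
// with no minimums under the new subscription model.
const desktops = 250;         // hypothetical fleet size
const nupLicenses = desktops; // one NUP per desktop user

console.log({ processorLicenses, nupLicenses });
// { processorLicenses: 8, nupLicenses: 250 }
```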

What Does Your Java Licensing Look Like?

To answer this question, you need the right data on your JDK environment. Here are some of the important questions to ask about Java licensing, followed by a small inventory sketch:

  • Where is Java used?
  • Where is Java installed?
  • Which version of Java do you have in your environment?
  • Which applications are integrated with Java?
  • How many users are there?
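A minimal sketch of gathering version data with Node.js and TypeScript; the host names and the ssh-based approach are assumptions to adapt to your own environment and inventory tooling:

```ts
import { execSync } from "child_process";

// Hypothetical hosts; in practice this list would come from your inventory.
const hosts = ["app-server-01", "app-server-02"];

for (const host of hosts) {
  try {
    // 'java -version' prints to stderr, hence the 2>&1 redirect.
    const output = execSync(`ssh ${host} 'java -version' 2>&1`).toString();
    console.log(`${host}: ${output.split("\n")[0]}`);
  } catch {
    console.log(`${host}: java not found or host unreachable`);
  }
}
```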

The End of Oracle’s Java Public Updates

According to Oracle's Java updates roadmap, public availability of updates for commercial users of Java SE 8 ended in January 2019. This means that commercial Java SE 8 users will not receive any further critical updates, which can put business operations at risk. In this situation, businesses can either purchase Java SE subscription licenses or move their Java workloads onto an alternative platform such as OpenJDK. An Oracle JDK to open source JDK migration involves setting up an OpenJDK environment and making the open source migration successful.

Action Items

If you're an existing Java SE user, it's important to conduct internal assessments of your current Java deployments, both to ensure your compliance with the license and to determine whether shifting to the new subscription model is more cost-effective.

If you expect your commercial-use requirements to grow, you should consider shifting to the subscription model. Should you switch, you are free to choose between the CPU-based and NUP-based subscriptions; which of the desktop or server-based subscriptions is better for your environment depends on your licensing requirements.

If you are unsure whether you qualify as a commercial user, it's advisable to conduct an internal assessment; many organizations assume they are running Java SE's free version when in fact they are not.

To be safe, have your legal team confirm that Oracle's Java licensing policies allow your team to use Java SE without purchasing commercial licenses.

Tips for Java Migration

Not all Java users know the ins and outs of Java migration, but experts know the right process for an Oracle JDK to open source JDK migration, making the move to an open source Java development kit smooth and successful.

Before OpenJDK Migration:

Before the OpenJDK migration, it's advisable to set up a continuous integration environment that builds your source code and runs your unit tests against OpenJDK.

It's also advisable to prepare a list of dependencies using your build tools and then perform an inventory analysis.

During OpenJDK Migration:

Conduct performance tests on your app as it runs on OpenJDK, and make sure the performance test scripts have been appropriately updated.

Also, thoroughly test everything affected by the Oracle JDK to open source JDK migration, and beware of quirks in the memory-management algorithms that differ between the two JDKs.

After OpenJDK Migration:

Double-check the Oracle JDK to open source JDK migration and the JDK environment to confirm that every aspect of the migration has been successful.

JDK migration is not an easy task, not even for experts. Nevertheless, it is a doable task that can be successfully performed with the right open source Java development kit.

We here at Tarams Software Technologies help companies migrate from Oracle JDK to OpenJDK. We understand the need of the hour and our in-house experts are always ready to answer your queries and assist you in achieving your business goals.
