Branch Roundtable – Managing Legacy Data Silos: Write-Up
Dave Thorpe

Branch's February roundtable took time out to focus on the management of legacy data silos and the implications for businesses that have them. We were joined at the table by five CTOs from Manchester's incredible tech ecosystem, and it was great to see the conversation flowing throughout; though it was surprising how quickly they all managed to agree!

In today's world, data is the new oil, and businesses are generating vast amounts of it. With that growing volume, however, companies often face the significant challenge of managing and accessing data silos. Data silos are bodies of data isolated in separate systems, applications or departments, making it difficult for an organisation to retrieve, manage and analyse its data. This fragmentation can lead to costly errors, lost opportunities and inefficient decision-making.

The first step towards managing data silos is to develop a data strategy. What is the driving business need for your data? What task have you been set, and what is it going to accomplish? Identifying the data you have, the silos it sits in and the discrepancies between those sets of data is often a strong place to start: it lets you see the task in front of you and create a roadmap for consolidation. The strategy should also define the data governance policies and processes for ensuring data accuracy, consistency and security across the organisation. With clear governance policies in place, organisations can ensure that data is used effectively and efficiently.

An excellent point of debate within our discussion was whether to merge data from multiple sources into a single unified database at all. The fear amongst the panel was that merged data rarely retains its original quality.
If Silo A had a quality score of 100 and Silo B had a quality score of 100, it is unlikely that both will retain such a high score post-merge. The quality of your original data has been lost, likely forever!

A more popular approach amongst the group was to build a layer that sits above your data silos and integrates with them all. With this approach, as your business acquires more data it can be brought in line with your data access point without the need to merge. On the whole this seemed the simplest way to meet most business needs: it protects your sources of data while converging all business needs into a single access point. It also lets you keep business units separate from a data perspective, which is useful if individual business units might be acquired by a third party at some point in the future.

Talking of acquisitions: data is regularly acquired when companies merge, and each company will have set up its own standards and systems. Merging practices across teams can be difficult, which further supports the idea of a data access layer that each team can connect its data to, rather than merging data or practices.

Ensuring data quality is essential to managing data silos effectively. Poor data quality leads to errors, inconsistencies and inaccurate insights, so it's crucial to establish data quality standards and processes to ensure that data is accurate, complete and consistent. Data quality checks should be performed regularly, and any issues addressed promptly. By ensuring data quality, organisations can rely on their data to make informed decisions.

We are now seeing data standardisation across certain sectors due to market dominance. A great example given by the group was Amazon: their dominance has many retailers aligning their data standards for ease of integration.
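The "access layer" approach the group favoured can be sketched in a few lines of Python. Everything here is hypothetical and for illustration only: `SiloA` and `SiloB` stand in for two legacy systems with different schemas, and `CustomerRecord` is an assumed unified shape. The point is that each silo is read in place and normalised at query time; the silos themselves are never modified or merged, and a newly acquired silo only needs one new adapter to be brought in line.

```python
from dataclasses import dataclass


@dataclass
class CustomerRecord:
    source: str       # which silo the record came from -- provenance is kept
    customer_id: str
    email: str


class SiloA:
    """Stands in for one legacy system (hypothetical dict-based schema)."""

    def fetch(self):
        return [{"id": "c-001", "email": "ada@example.com"}]


class SiloB:
    """Stands in for another legacy system (hypothetical tuple-based schema)."""

    def rows(self):
        return [("c-002", "grace@example.com")]


class DataAccessLayer:
    """Single access point that normalises each silo's schema at read time.

    Because the underlying silos are left untouched, the original data
    (and its quality) is preserved rather than lost in a one-off merge.
    """

    def __init__(self, silo_a: SiloA, silo_b: SiloB):
        self.silo_a = silo_a
        self.silo_b = silo_b

    def all_customers(self) -> list[CustomerRecord]:
        # Adapt each silo's native shape into the unified record.
        records = [
            CustomerRecord("silo_a", r["id"], r["email"])
            for r in self.silo_a.fetch()
        ]
        records += [
            CustomerRecord("silo_b", cid, email)
            for cid, email in self.silo_b.rows()
        ]
        return records


layer = DataAccessLayer(SiloA(), SiloB())
for record in layer.all_customers():
    print(record.source, record.customer_id, record.email)
```

Supporting a third silo, or one inherited through an acquisition, then means adding one more adapter to `all_customers` rather than migrating and re-validating the data itself.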
This is the opposite of how, say, the supermarkets have been operating: no single supermarket dominates the market, so no shared standards have emerged. This creates an interesting divergence in data requirements across different markets, and many believe plenty of tasks could be made easier with more standardisation.

Finally, it is essential to foster a data-driven culture within the organisation. This involves creating awareness of the importance of data and how it can be used to improve decision-making. Our discussion identified a gap in the job market for data analysts within businesses who take ownership of the data, its storage and its implementation in the business. There was a fear amongst our tech leaders that "new school" programmers have less appreciation for data, how it works and how to optimise its storage, now that computing resources are comparatively huge. According to one, some wouldn't even know how many bits make a byte!

It was also identified that many tech leaders are worried, even scared, of changing their data practices. The old adage of "if it isn't broken, don't fix it" comes through, but what this has actually created is a huge number of systems built to work around outdated or poorly implemented ones. There is too much risk in changing these fundamentals, and so we are constantly trading off the needs of the business today against what would help in the long run. This is a challenge we are going to continue to face and need to be prepared for. We might not always be able to help businesses change the way they do things, and may have to compromise in the technology we provide for them.

If you are interested in attending any of our Branch Events please follow the link: Branch Events