
Vendor comparison: address and data quality


To see the address and data quality landscape in full size, click here: Address and Data Quality Landscape


Additional information on the individual providers:

ABIS

  • Fill level analyses: checking and enrichment of the salutation; recognition and structuring of name and address components and the date of birth; checking and correction of postal addresses, telephone numbers and e-mail addresses
  • Quality analyses: name analyses (correctness of first and last names, statistical age, company name recognition), analysis of postal addresses, telephone numbers and e-mails, deliverability and confirmation of postal addresses
  • Enrichment through external data: telephone numbers (public sources), deliverability of the postal address, information on relocations and deceased persons (internal: Deutsche Post Adress GmbH & Co. KG, ABIS GmbH; external: SAZ, eXotarget, Gemini, Axiom), telephone number and e-mail address validation
  • Reporting: as a standard report (ABIS audit) or individually according to requirements
  • Integration into an e-commerce or CRM software landscape: as an interface or software, currently as a customer-specific solution

AS Address Solutions

  • Duplicate recognition: knowledge- and rule-based duplicate recognition. Before names are compared with each other, it is first determined exactly what is what in the name: first and last names, titles, salutations, prefixes, place names, company proper names, legal forms and even company activity words are recognized, regardless of the field or position within the field in which they appear.
  • Flexible adjustment of match rules possible: after the knowledge- and rule-based analysis of the field contents, the comparison is carried out for each individual feature with a procedure specially tailored to that field content. The resulting partial scores can be used in any number of meaningful decision rules. Any number of decision rules can be combined into a decision matrix, stored in a configuration and used for various duplicate identifications, stock comparisons or online searches. This means that virtually any business rule, no matter how complex, can be mapped and used for duplicate detection. In addition to the partial scores, individual minimum values for each field content and different weights for the respective partial scores can be defined (a simplified sketch follows after this list).
  • User interface for duplicate processing: a GUI is available, which can also be customized.
  • Suggestion for merging by the system: since this is a very individual task for each company, AS Address Solutions advises the customer on the appropriate implementation for duplicate merging. Built-in procedures for merging exist nevertheless and are completely sufficient for a mailing, e.g. selection at random, priority-controlled selection, or selection of the record with the highest score compared with the other group members.
  • Rules for merging duplicates: see above; those procedures can also be used for the final merge or the creation of a master record (golden record) if the customer so desires. For more complex requirements, which vary greatly from company to company, AS provides highly qualified and experienced consultants who can draw on a large pool of solutions from mergers already carried out, so the optimal merge for the respective customer can be configured very quickly.
  • Fill level analyses: yes, this is done with AS Inspect, which immediately determines the fill level of each field and makes it available.
  • Quality analyses: yes, there is a separate software program for this, AS Inspect, which precisely checks data quality, e.g. the duplicate rate, the number of correct or correctable postal addresses, key figures for the quality of names and other content, etc. The results can be saved in descriptive reports and consulted for later re-comparisons. With AS Inspect, the customer can import any data source within a few minutes and with just a few mouse clicks in a clear GUI and then put it through its paces.
  • Enrichment with external data: there is an easy way to search any reference data through simple integration of that data into a conveniently searchable, so-called AS search inventory (search index). Despite the company philosophy of remaining as vendor-independent as possible, ready-made interfaces exist for selected reference data, such as postal data (Nexiga), the Swiss national file, geocoordinates or the Robinson list, but AS is by no means limited to these.
  • Reporting on the analyses: every AS software package includes reports on the most important results, e.g. in AS Inspect PDF reports on the fill level, the duplicate rate, the quality of postal addresses and names, etc.
  • Integration into a CRM system landscape is possible: yes, at any time. In addition to direct so-called integration components, which make the basic functionalities available for a specific environment (e.g. Oracle PL/SQL, Microsoft T-SQL), AS offers all common interfaces for simple and fast integration of the basic functions into practically any environment, any application and of course any CRM system. Integration components include SOAP and RESTful interfaces, XML-RPC, TCP/IP, COM, PL/SQL, T-SQL, etc. Through a certified SAP partner there has also been a complete integration of the software in SAP R/3 for many years. AS is not limited to specific CRM providers and solutions; there are already successful implementations with Microsoft Dynamics, SAP R/3 and Salesforce, among others.
  • Integration into an e-commerce system landscape is possible: of course; due to the concept of integration components there are no limitations here either. You can find out more about Address Solutions here.
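To make the decision-matrix idea above concrete, here is a minimal Python sketch. It is not AS Address Solutions' engine; the field names, weights, per-field minimums and the threshold are invented for illustration:

```python
from difflib import SequenceMatcher

# Illustrative weights, per-field minimums and threshold; in a real system
# these would come from the configured decision matrix.
WEIGHTS = {"last_name": 0.4, "first_name": 0.2, "street": 0.25, "city": 0.15}
FIELD_MINIMUM = {"last_name": 0.7}   # below this floor, never a duplicate
DUPLICATE_THRESHOLD = 0.85           # total weighted score needed for a match

def partial_score(a: str, b: str) -> float:
    """Similarity of two normalized field values, in [0, 1]."""
    return SequenceMatcher(None, a.strip().lower(), b.strip().lower()).ratio()

def is_duplicate(rec_a: dict, rec_b: dict) -> bool:
    total = 0.0
    for field, weight in WEIGHTS.items():
        score = partial_score(rec_a.get(field, ""), rec_b.get(field, ""))
        if score < FIELD_MINIMUM.get(field, 0.0):
            return False             # hard per-field minimum violated
        total += weight * score
    return total >= DUPLICATE_THRESHOLD

a = {"last_name": "Meier", "first_name": "Hans", "street": "Hauptstr. 1", "city": "Bonn"}
b = {"last_name": "Meyer", "first_name": "Hans", "street": "Hauptstr. 1", "city": "Bonn"}
print(is_duplicate(a, b))            # True: 0.4*0.8 + 0.2 + 0.25 + 0.15 = 0.92
```

The per-field minimum mirrors the idea that some features (here the last name) must reach a floor before any overall score may declare a duplicate.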

ataccama

  • Any type of analysis can be performed (e.g. fill level, timeliness or consistency checks). These checks can be fully tailored to customer needs.
  • Any external source can be used for enrichment for all data types.
  • Quality analysis results can be accessed through ataccama's own dashboard, but they can also be exported to external tools (including BI tools such as Tableau, Power BI, Qlik, etc.) for further analysis or processing.
  • ataccama can be integrated with a variety of third-party tools in both directions (providing or accessing data) via APIs, files or a direct database connection. Quality checks can be made available as web services to CRM and e-commerce systems, allowing a data quality firewall to be created.

Deutsche Post Address

  • 100% identical to the subsidiary ABIS
    1. Additionally: "topicality" and "internationality"
    2. A core competence of Deutsche Post Adress is address updating. The relocation database POSTADRESS MOVE, based on redirection orders to Deutsche Post, is the database with the most new relocation notifications per year in Germany. Other reference data sources enable users to identify undeliverable addresses and addresses of deceased customers.
      Deutsche Post Adress also offers many services for master data maintenance and data quality optimization for international address data through its sales unit POSTADRESS GLOBAL. A global network of service providers enables companies to maintain addresses from almost every country in the world.

Deutsche Post Direkt

  • Option for application programming interface (API) for customer-specific on-premise setup
  • Enrichment with various additional information
  • Comprehensive reporting thanks to visualized address audit and/or customer profile analyses
  • Various integration paths for vendor-independent CRM system landscapes via API (REST, SOAP), web service or asynchronously for bulk files via SFTP
  • Established integrations for eCommerce applications via plugins for Magento, Shopware and Plentymarkets, or via GitHub for individually desired integrations

bisnode

  • Fill level analyses: postcode, town and address
  • All firmographic features can be enriched by external data
  • Reporting can be called up via online analysis

eXotargets Data Network

Founded in 2012, eXotargets has become a major information aggregator in Germany. As a data provider, eXotargets offers a variety of data points for the areas of Datacare (address cleansing, data maintenance, supplementary relocation addresses, deceased register, building directory/street file/geo-coordinates), Dataselect (lead generation, re-targeting) and eIDV (KYC, Electronic Identity Verification).

innoscale

  • Fill level analyses: data set-oriented and attribute-oriented
  • Quality analyses: duplicate recognition, completeness check, consistency check, plausibility check, syntax check, semantic check
  • Enrichment: address data/vendor data: Deutsche Post Direkt, Melissa Data, Bisnode, Creditreform, Jaroso
  • Reporting: externally and regularly time-triggered, ad-hoc reporting, standard as well as individual data quality key figures, recording and presentation of the development of data quality key figures over time, aggregation
  • Customized interfaces to e-commerce and CRM software landscapes: Salesforce, SugarCRM, SAP ERP (IDOC), MS Dynamics, Oracle, PeopleSoft, PSIpenta, Sage ERP, Comarch ERP

Kroll software

  • Enrichment by external data possible via ODBC/OLEDB or CSV
  • Reporting on the analyses: duplicates found are clearly displayed with the degree of agreement.
  • Kroll software is a standalone application that can access any tables; it can therefore be connected to e-commerce or CRM software landscapes

omikron

  • Fill level analyses: absolute and percentage values possible
  • Enrichment through external data: any data sources, e.g. insolvencies, relocation information, deliverability, phone numbers, deceased verification, commercial register information, company information (employees, industry, etc.) and geocoordinates.
  • Reporting: capture performance data and KPIs for processes, or as an evaluation of the complete data stock. Support for third-party applications, e.g. Grafana or Power BI.
  • Integration is possible: into Microsoft CRM, SAP C/4HANA, Salesforce, Microsoft Business Central, SAP and many other systems via REST/SOAP services; front-end integration is also straightforward

 

q.address

  • Duplicate detection: any number of data sources, all common data formats and databases. Furthermore: postal address check, name check, check of further data (including VAT ID, IBAN, e-mail, domain, etc.), data formatting (telephone numbers, dates, etc.)
  • Search rules: automatically generated rules for standard tasks; individually definable rules, configurable down to field level. Several rules can be applied in parallel (e.g. name/address, first name/mobile number, VAT ID). Hierarchical data structures (e.g. company/contact person), reference data and dictionaries (with individual additions/adaptations), result post-processing incl. team processing.
  • Interface for the user: Windows (ribbon-based GUI) or a command-line variant for integration into processing procedures.
  • Proposal for merging: q.address supports:
  1. Selection: selection of one record (required, for example, if the data of several address suppliers is to be merged)
  2. Merge: merging at field level ("merging without data loss")
  3. In-place cleansing: inventory cleansing in Dynamics 365 CE (CRM), Salesforce, SAP CRM
  4. Rules: a) priority at the level of the data source (file), record and data field, b) priority according to record/data field quality, age, etc. (a hypothetical merge sketch follows after this list)
  • Fill level analyses as counts
  • Quality analyses: counts (incl. consideration of data quality, e.g. number and type of duplicates, correct postal codes, spelling, etc.)
  • Enrichment through external data: beDirect, BDS Online, Creditreform. Furthermore: any other provided data
  • Reporting: prepared analyses
  • Integration in CRM system landscapes: ready-made integrations for Dynamics 365 Customer Engagement (CE/CRM), Dynamics 365 Business Central (BC/NAV), Salesforce, SAP CRM. For custom integrations: q.address Quality Server, q.address Cloud Services
  • Integration into an e-commerce system landscape is possible via q.address Quality Server and q.address Cloud Services
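As a rough illustration of merge rule 4 above (priority by data source, record and field quality), a hypothetical field-level merge could look like this; the source ranking and the quality heuristic are assumptions for the example, not q.address behavior:

```python
# Hypothetical priority of data sources: lower number wins.
SOURCE_PRIORITY = {"crm": 0, "webshop": 1, "import_2019": 2}

def field_quality(value) -> int:
    """Very crude quality heuristic: prefer non-empty, longer values."""
    return len(str(value).strip()) if value else 0

def merge_golden_record(duplicates):
    """Merge duplicate records field by field ("merging without data loss")."""
    fields = {f for rec in duplicates for f in rec if f != "source"}
    golden = {}
    for field in fields:
        candidates = [r for r in duplicates if r.get(field)]
        if not candidates:
            continue
        # First criterion: source priority; tie-breaker: field quality.
        best = min(candidates,
                   key=lambda r: (SOURCE_PRIORITY.get(r["source"], 99),
                                  -field_quality(r[field])))
        golden[field] = best[field]
    return golden

records = [
    {"source": "webshop", "email": "h.meier@example.com", "phone": ""},
    {"source": "crm",     "email": "",                    "phone": "+49 228 1234"},
]
print(merge_golden_record(records))
# -> {'email': 'h.meier@example.com', 'phone': '+49 228 1234'}
```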

relate with the product reDUB

  • Fill level analysis at field level on import
  • Quality analyses: e-mail, telephone or postal
  • Enrichment through external data:
    • Age estimation
    • Gender code
    • Ethnic assessment (internal)
    • Postal correction worldwide
    • Relocation, deceased, etc. (external)
  • Reporting: statistics
  • Integration into CRM system landscape: via interfaces, no API
  • Integration into e-commerce system landscape: via interfaces, no API

UNISERV

      • Fill level analyses work on every original field as well as on all validation results and enrichment potentials
      • Quality analyses: In addition to the validation, assurance and production of good data quality, there are additional solutions for the analysis of complete data sets and the presentation of the respective results in dedicated data quality dashboards and reports. Furthermore, any results can be imported into existing BI systems via interfaces.
      • Integration in e-commerce and CRM system environments is possible: All solutions from Uniserv can integrate a wide range of APIs in any system (CRM, ERP, e-commerce, MDM, etc.). Dedicated and certified plug&play connectors are available for the entire SAP platform. For Microsoft Dynamics, Salesforce, Aurea and many others, there is a partner network, as well as corresponding project experience.
      • Central post-processing console or data steward console

Melissa

      • Fill level analyses: breaking down and sorting the components of names and addresses and checking and correcting personal data such as address, e-mail and telephone number and adding missing components (e.g. postcode)
      • Quality analyses: address validation, address auto-completion, e-mail validation, telephone number verification, name analysis, identity check, duplicate check and geocoding
      • Enrichment through external data: Melissa offers a wide range of data quality solutions in the field of address management. There are various partnerships and data sources for this purpose – in each case for the appropriate solutions.
      • Reporting on the analyses: on request, a Data Quality Report can be created to give you a statistical insight into your data. In addition, result codes are issued for the individual solutions, showing the errors (e.g. errors in the postal code) and the changes (e.g. postal code changed).
      • Integration in a CRM landscape: as a web service; Microsoft Dynamics CRM, Oracle (including PeopleSoft)
      • Integration into an e-commerce system landscape: as a REST interface; plug-ins, e.g. for Magento and Shopware

Further information about Melissa Data here.

TOLERANT Software

      • Fill level analysis for master data (name, address, dates of birth)
      • Quality analyses for master data (name, address, dates of birth)
        • Listing of the characteristics mentioned
        • Minimum and maximum values
        • Correctness check
        • Check for completeness
      • Enrichment through external data:
        • Geocoordinates for addresses
        • Statistical districts (automotive industry)
      • Integration in CRM landscape possible in:
        • MS Dynamics 365
        • Salesforce

 

loqate

  • Quality analyses: loqate outputs a quality code that indicates the quality of the address.
  • Reporting on the analyses: reporting can be created with the help of the quality code; loqate can also provide reporting.
  • Integrations with CRM and/or eCommerce systems are possible via plug-ins or a REST interface.

More about loqate can be found here.


Do you need help or do you still have open questions?
We are happy to put our know-how at your disposal.

 

Intro: why is the topic relevant, and why is it strategically so important?

For all companies seeking direct contact with their customers, the customer database is the linchpin of coordinated sales and marketing activities. The conviction that quality – in particular the simple correctness of address data – plays an essential role in this process is finally gaining acceptance.

Customer relationship management, data-driven marketing and sales, value to the customer and database marketing all demand uncompromising quality and timeliness. The usual buzzwords include address validation, data cleaning, data cleansing, data quality tools, data analysis and optimal data collection by your own employees, but also optimal support for self-service data entry by prospects and customers. This is the only way to ensure that e-mail addresses for e-mailings and e-mail marketing are captured correctly from the outset.

In July 2020 we created and published the first German address and data quality software and service provider landscape. The wide variety of technologies, tools and software solutions does not make the choice of products and/or service providers any easier. We can help you with this.

In this document you will learn the most important steps of pragmatic data management and how to improve the quality of your customer addresses step by step.

 

Best Practices Examples – How does the analysis usually start?

At some point, the CEO receives a letter in which the last or first name is misspelled. The "inner" question then arises automatically:

What does it look like in my own company?

At the next executive meeting, the question is passed around: is it IT, marketing, sales, customer service or database marketing that is responsible?

If a person responsible for data quality management can be found at all, the next questions are: "Is our database okay, are our addresses okay? What are we doing to keep it that way? Is there faulty data? Who looks after it, who corrects it? And what about all the other data?"

At this point, the person addressed often responds with politically colored sentences such as: "Don't worry! No mailings were returned as 'undeliverable' in the last campaign." (However, no advance disposition was printed as information for the letter carrier, so no mailings can come back at all. An advance disposition is the text above the address field, "If moved, please forward and return to us with address correction card"; it is a premium address service provided by Deutsche Post.) In the end, people often enough try to leave the impression that everything is in order.

 

Data quality management: Why are maintained addresses so important?

The recipient of a letter or message does not like to see his name misspelled. Correct and sensibly used personalization in a letter or e-newsletter increases the response rate. Poor address quality severely impairs all analyses and thus the basis for decision-making. Incorrect addresses lead to increased mail returns, unnecessary waste of budget and lost revenue. Duplicate addresses or duplicate letters frustrate recipients ("Man, they must have money").

If, for example, mother and daughter receive letters or catalogs at the same time but with different offers, this leads to a loss of sales, since they will naturally always pick the cheaper offer.

Only with standardized, cleansed and up-to-date addresses can external data be added that enables further segmentation or qualification (for example, microgeographic or lifestyle data).

 

Data quality management: definition of address and data quality

Data is information. Important information. Data is the new oil! This guiding principle has become increasingly accepted in recent years.

Data, which of course also includes addresses, is the basis for good dialog marketing, targeted sales, perfect service, customized products, sophisticated reporting and detailed analyses, the determination of key figures, and much more. From the company's point of view, it is all about individualization and personalization. The prospect or customer wants a "felt closeness"; he wants to be understood.

The more valid this data is, the better these measures work and the better the prospect or customer feels looked after. For most companies, it is all about improving the status quo. High data quality is therefore the long-term goal.

In addition to the data protection aspects, which we will not go into here but merely point to, companies should also consider the aspect of motivation:

What does mindfulness mean in relation to this topic? What motivates employees to achieve good address and data quality? This is ultimately a management task and a question of attitude and mindset.

All efforts of data quality management (DQM) have one goal:

to achieve and maintain the best quality of existing data in an efficient way.

And it does not stop at a one-time cleanup: the aim is to keep this data up to date at all times, with everything available to the company and with the help of the prospect or customer.

Wikipedia writes:

"Information quality is the measure of the fulfillment of the 'entirety of requirements for information or an information product that refer to its suitability for satisfying given information needs'.[1] Statements about the quality of information refer, for example, to how exactly it 'describes' reality or how reliable it is, i.e. to what extent it can be used as a basis for planning one's own actions.

The term data quality (as a measure of the goodness of data) is very close to 'information quality'. Since the basis for information is 'data', data quality affects the quality of the information extracted from the corresponding data: no 'good' information from bad data."

A little further down, the same Wikipedia article states:

"Quality criteria for data quality differ from those for information quality; according to [7], criteria for data quality are:

  • Correctness: the data must correspond to reality.
  • Consistency: a data record may not contain contradictions within itself or with other data records.
  • Reliability: the origin of the data must be traceable.
  • Completeness: a data record must contain all necessary attributes.
  • Accuracy: the data must be available with the required accuracy (example: decimal places).
  • Timeliness: all data records must correspond to the current state of the depicted reality.
  • Non-redundancy: no duplicates may occur within the data records.
  • Relevance: the information content of data records must meet the respective information requirements.
  • Uniformity: the information in a data record must be structured uniformly.
  • Unambiguity: each data record must be unambiguously interpretable.
  • Comprehensibility: the data records must correspond in their terminology and structure to the ideas of the departments."

 

Data Governance as a meta-level – Wikipedia writes about it:

"Here the focus is on a single company. Data governance here is a data management concept in terms of the ability of an organization to ensure that high data quality is maintained throughout the data lifecycle and that data controls are implemented to support business objectives.

The key focus areas of data governance include availability, usability, consistency[2], data integrity and data security. This includes establishing processes that ensure effective data management across the enterprise, such as accountability for the adverse effects of poor data quality, and ensuring that the data an organization holds can be used by the entire organization.

A data steward is a role that ensures that data governance processes are followed and policies are enforced, and that also makes recommendations for improvements to data governance processes."

This section could also be part of section 6, "Quality of addresses and data is a leadership task", because it is about responsibility. Further below, the focus is on management responsibility. A data steward or similar role corresponds partly to a data protection officer and partly to an operational manager.

We will be taking a more detailed position on this in the coming weeks.

 

Data Quality: Is there a difference between addresses and data?

There is not much difference, but we will add a few notes about it.

Address quality refers to the data belonging to the address. These are usually variables such as salutation, title, first name, last name, street and house number or P.O. box, postal code and city. The postal code can be further differentiated into the street postal code and the P.O. box postal code. Address quality also covers variables such as the e-mail address, telephone number, smartphone number, fax number, etc., because address quality is all about the delivery of the message, regardless of which communication channel is used to deliver it.

All other data that does NOT directly belong to the address is considered separately under the name of data quality. This is not entirely free of overlaps and thus contradictions. Nevertheless, we summarize the topic of data quality management at this point as follows:


    • Data is information → information quality as the umbrella term
    • Data and addresses of prospects and customers → data quality as the bracket
    • Addresses and contact information of prospects and customers → address quality

 

Criteria that cannot be directly assigned to a prospect or customer but are used in the context of transactions are considered separately. Terms such as Product Information Management (PIM), Master Data Management (MDM) and similar are used for this purpose.

 

Data Quality: Basic know-how of address quality management

How do you determine address quality within data quality management activities? We have developed a simple method for this. What are the most important measures? Please carry out the following simple checks:

Step one – visual inspection

Transfer all existing addresses (customers, prospects, raffle entries, customer service inquiries, etc.) from a contiguous postal code area (preferably one you know well personally) into an Excel file. Around 5,000 addresses are already sufficient. Before the check begins, insert one or more columns in which comments can be entered for each address.

Then sort the addresses by various criteria and, for example, take a closer look at the first 1,000 and the fourth 1,000 addresses as random samples.

Each of these 1,000 address packets is now examined as follows:

First sort the addresses by last name and first name, independent of the postal code. Look at the spelling of the last and first names and you will quickly see how many different spellings the same names have been entered in: wrong upper/lower case, the first name in the last name field or vice versa, the company name in a name field, the legal form missing.

Then check whether the salutation matches the first name. The title, too, is regularly entered incorrectly in address fields: sometimes it is next to the first name, sometimes in its own field; "Dr." appears next to "Doctor" and "Prof." next to "Professor", and so on. Now sort the addresses by postal code, street, last name and first name.

You can quickly determine whether the file contains person duplicates or whether several family members are entered under the same address. Are these grandmother, mother and daughter, or is that a coincidence? In the last step, check whether all postal codes have five digits. Is the leading zero missing in East German addresses (it often gets lost when exporting to Excel)? Have foreign addresses crept in, and are they marked accordingly? Now count the number of erroneous addresses within the packets. If the error rate is higher than two to three percent, you should immediately take the following steps; parts of these checks can also be scripted, as sketched below.
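Parts of this visual inspection can be pre-computed before anyone opens Excel. A minimal pandas sketch, assuming a hypothetical CSV export with columns zip, street, last_name, first_name and salutation; the inline first-name table stands in for a licensed provider table:

```python
import pandas as pd

df = pd.read_csv("addresses.csv", dtype={"zip": str})  # keep zip as text!

# 1. German postal codes have exactly five digits; Excel exports often
#    drop the leading zero of East German postal codes.
bad_zip = df[~df["zip"].fillna("").str.fullmatch(r"\d{5}")]

# 2. Salutation vs. first name, using a tiny stand-in first-name table.
FIRST_NAMES = {"hans": "Herr", "anna": "Frau"}   # real tables are licensed
expected = df["first_name"].str.lower().map(FIRST_NAMES)
bad_salutation = df[expected.notna() & (df["salutation"] != expected)]

# 3. Possible person duplicates: same zip, street, last and first name.
dupes = df[df.duplicated(
    subset=["zip", "street", "last_name", "first_name"], keep=False)]

error_rate = (len(bad_zip) + len(bad_salutation)) / len(df)
print(f"error rate: {error_rate:.1%}, duplicate rows: {len(dupes)}")
```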

A little excursion into B2B on the topic of data quality management:

Very often the address models of ERP, e-commerce or other systems have two or three fields intended for the company name. This usually leads to a huge problem: the first part of the company name is entered in the first field, an addition to the company name in the second field, and the rest, including the legal form, ends up in the third field.

When a new company record is created, the following can happen: an employee checks whether the company already exists, but enters the company name in a different way, so the duplicate check program does not find it. The supposedly new company is created a second time.

Or the prospect/customer spells the name slightly differently than the name already in the system. In connection with e-commerce, this company would also be created a second time, because here too the duplicate check program usually does not recognize the duplicate. If an employee creates this spelling, the argument afterwards is often "the customer wanted it that way, so I created it the same way".

Intermediate conclusion on B2B data quality management:

Exactly at this simple point there is often a lot of junk in databases, especially in B2B. In our projects we have often found between six and ten different spellings as duplicates. This can be avoided through training and rules, and through normalization before the duplicate check, as sketched below.
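One defense against these B2B duplicates is to normalize the company name across all name fields before the duplicate check runs. A sketch of the idea; the legal-form pattern is deliberately tiny and illustrative:

```python
import re

# Illustrative legal-form variants; a production list is much longer.
LEGAL_FORM_RE = re.compile(r"\b(gmbh\s*co\s*kg|gmbh|ag|kg|ohg|ug|se)\b")

def normalize_company(*name_fields: str) -> str:
    """Join the two or three company-name fields into one match key."""
    name = " ".join(f for f in name_fields if f).lower()
    name = re.sub(r"[^\w\s]", " ", name)   # "Co.KG" -> "co kg", "&" dropped
    name = LEGAL_FORM_RE.sub(" ", name)    # strip legal forms
    return re.sub(r"\s+", " ", name).strip()

# All three spellings collapse to "müller maschinenbau": duplicate found.
print(normalize_company("Müller", "Maschinenbau", "GmbH & Co. KG"))
print(normalize_company("Müller Maschinenbau GmbH & Co.KG", "", ""))
print(normalize_company("MÜLLER Maschinenbau", "GmbH", ""))
```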

 

Step two – address audit

Many address service providers offer a cost-effective address audit in which your addresses are compared with various reference data. As a result, you receive an assessment of how good the data as a whole is. After this check, you will be able to target the necessary qualification measures much better and rule out the far too expensive watering-can principle of "everything for everyone".

 

Practical tip: Do not send a single file with all addresses for checking. Divide your data into meaningful groups and have their quality checked separately. For a duplicate check, of course, all addresses must be checked at once.

 

Step three – data audit

Here the contents of the variables are analyzed with univariate or simple statistical methods, and "anomalies", incorrect entries or unnecessary values are revealed. More about this further down in the section on basic data quality know-how.

 

Step four – summary of audits and visual inspection

From the three check steps, a summary is created for the management. The detailed analysis identifies both weaknesses and things done well. For the weak points there are recommendations a) for a one-time cleanup and b) for ongoing optimization and control.

Ideas for key performance indicators (KPIs), special management tasks, process optimization or IT support round out the picture.

This is the basis for further planning, action and controlling.

 

Step five – the decision: "do it yourself" or "have it done"

Before the whole cleanup procedure can be started, the question arises: do it yourself or have it done by a service provider?

The rule clearly speaks for "do it yourself": addresses belong to the core competence of every company that does CRM and dialog marketing. Only with smaller address lists or in the initial phase can a service provider be faster and easier.

In the medium term, you should always maintain the addresses in-house. Addresses are the capital of every company, and a service provider (unless it is a proven specialist for your industry) cannot represent the individuality of a company. This also goes hand in hand with training the employees: rules are created for how addresses are to be recorded in the future and how data qualification is carried out.

Practical tip: International companies should also have the topic of address quality dealt with in the respective country. The head office often has too little knowledge about regional peculiarities and general conditions.

 

Step six – the one-time or initial cleanup

Normalization or standardization: You prepare the address data in such a way that all processable information is written into the corresponding fields. Then you check and correct the salutation using a first-name table and the correct form of address. Such tables are available from various providers, also for many Western and Eastern European countries.

 

Postal cleanup: With the reference tables from the postal service you can standardize the spelling of the street, the city name and, if necessary, the postal code. For addresses that have not been validated for a longer period (six to twelve months), a relocation check is recommended, which lets you switch to the new address accordingly. By matching against data on deceased or insolvent persons and companies, you can clean up your addresses in a further step.

 

Completion: With a correct address, completion or correction of company names is now possible.

Duplicate cleanup: After all necessary and possible corrections and enhancements have been made, duplicate matching is useful. You must check for person and family duplicates (business-to-consumer) as well as company and contact person duplicates (business-to-business); a simplified sketch follows below.
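To illustrate the person/family distinction in consumer matching, here is a toy sketch; it assumes the records were already normalized by the previous steps, and the keys and rules are simplified assumptions:

```python
from collections import defaultdict

def household_key(rec):
    """Same postal code + street + house number = same household."""
    return (rec["zip"], rec["street"].lower(), rec["house_no"])

def find_duplicates(records):
    households = defaultdict(list)
    for rec in records:
        households[household_key(rec)].append(rec)
    person_dupes, family_groups = [], []
    for members in households.values():
        seen = {}
        for rec in members:
            name = (rec["last_name"].lower(), rec["first_name"].lower())
            if name in seen:
                person_dupes.append((seen[name], rec))   # same person twice
            else:
                seen[name] = rec
        # Several records, one last name, one address: e.g. mother/daughter.
        if len(members) > 1 and len({r["last_name"].lower() for r in members}) == 1:
            family_groups.append(members)
    return person_dupes, family_groups

recs = [
    {"zip": "01067", "street": "Hauptstr.", "house_no": "1",
     "last_name": "Meier", "first_name": "Anna"},
    {"zip": "01067", "street": "Hauptstr.", "house_no": "1",
     "last_name": "Meier", "first_name": "Lena"},
]
persons, families = find_duplicates(recs)
print(len(persons), len(families))   # -> 0 1
```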

 

Manual correction: The last step is the manual corrections. This is certainly time-consuming but, depending on the customer's value, mandatory. Unfortunately, software does not recognize all errors and therefore cannot correct or clean them automatically. These "unsafe duplicates or spellings" are now reviewed record by record by your address quality experts, if necessary with a search on Google, in the imprint or at the residents' registration office, and then either confirmed as "correct" or corrected accordingly.

 

External enrichment: Only now can you enrich your addresses with telephone numbers, industry codes, or microgeographic or lifestyle data.

 

Create links: Furthermore, you should link several persons from one family or company. For company addresses, it is also recommended to create group connections or to link parent and subsidiary companies.


Here again the complete procedure in an overview:

Fig. 7.24: Example of an address and data quality cycle (Source: bdl, 2014)

Practical tip: Records that have already been checked against each other should be marked so that the same records are not processed again next time. Then only the newly added unsafe problem cases need to be checked again; one simple implementation is sketched below.
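One simple way to implement this marking, sketched with hypothetical record IDs and a local JSON file as the store: persist the pairs that have already been reviewed and skip them on the next run.

```python
import json
from pathlib import Path

REVIEWED_FILE = Path("reviewed_pairs.json")   # hypothetical persistence file

def load_reviewed():
    if not REVIEWED_FILE.exists():
        return set()
    return {frozenset(pair) for pair in json.loads(REVIEWED_FILE.read_text())}

def save_reviewed(pairs):
    REVIEWED_FILE.write_text(json.dumps([sorted(p) for p in pairs]))

def pairs_to_review(candidate_pairs, reviewed):
    """Only the newly added unsafe cases come back for manual review."""
    return [p for p in candidate_pairs if frozenset(p) not in reviewed]

reviewed = load_reviewed()
new_work = pairs_to_review([("id_17", "id_93"), ("id_17", "id_42")], reviewed)
reviewed.update(frozenset(p) for p in new_work)   # mark as processed
save_reviewed(reviewed)
```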

Now comes the endurance run: ongoing cleanup or sustainable data quality management

All the check steps of the initial or one-time cleanup mentioned above must of course be performed regularly and repeatedly within the running processes. In companies where a large number of people touch the addresses and possibly correct them, ongoing quality management is necessary. The same is true if there are webshops or other internet sources of data collection (newsletters, etc.) where customers register themselves.

In addition, the customer should be asked about possible changes at each contact, at regular intervals, but at least once a year; alternatively, he receives a letter or e-mail with a personalized landing page and the request to correct the incomplete address. A response incentive for more attention and response is recommended.

In principle, it is a matter of avoiding a) typical errors, b) insufficient data quality and c) unnecessary costs, and thus achieving high quality.

Concluding remark on operative data quality management (DQM) and better address quality:

An initial cleanup can take between three and nine months, depending on the number of addresses and their quality and condition. The costs, of course, vary greatly. They depend, for example, on the software used, on how much manual post-processing is required, and on how often the addresses have to be checked during ongoing business. Companies that send a mailing to the majority of their customers every month have different processes than companies that send only four mailings a year to a selected target group. It is important to provide a sufficiently large budget for initial external support, software, validation and manual maintenance.

Don't be surprised if your bank checks this aspect ("How good are your addresses?") the next time you ask for credit.

Perfect address management is the necessary basis for your future success and therefore one of the most important tasks in every company, regardless of whether you are dealing with 500 or five million addresses. These targeted cleansing and quality measures have usually paid off after six months, at the latest after twelve.

 

Data Quality: basic know-how of data quality

As briefly described above, univariate or simple statistical methods are used here to analyze the contents of the variables and reveal "anomalies", incorrect entries or unnecessary values.

In the first part of the audit we define the 20 or 30 variables that are particularly important for the company. It usually does not make sense to look at all the variables that exist in the customer data; that would be a Herculean task.

 

Practical tip: Even fields that normally allow no input freedom because of checkboxes should be analyzed. Why? Multiple migrations from old systems often leave field contents that no longer comply with the current rules but are still there, and are usually incorrect or outdated.

What is important data that can be directly assigned to a customer? For persons, this is e.g. the age or date of birth, nobility or academic title, profession, position, form of address, gender, segment codes and much more.

For companies, it would be e.g. the legal form, country code, codes for state, language and currency, customer and segment codes, which sales area the company is assigned to, which employee of your own company is assigned to this company or to a new contact person, whether it is a key account customer, etc.

Is the homepage field filled? Can it be deduced from the e-mail address? Are the industry codes all correctly maintained? Is there a reference to the sister, subsidiary or parent company? And much more is possible.

Now you put these variables into a flat file and run standard analyses on it.

One of these is a count by frequency of values. In alphanumeric fields you will find the most wonderful ideas of how to write actually identical field contents in the most varied ways. Numeric fields often contain values that should not appear there at all.

For numeric field types, a mean value calculation makes sense; outlier values can then be recognized quickly, for example where incorrect programming, wrong selection list contents or data transferred from the test system influence the quality.

The nice thing about this analysis is that you can see where the capture rules are not being followed. You can also see which processes are not yet running smoothly. A minimal sketch follows below.
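A minimal sketch of such a univariate audit with pandas; the file and column names are placeholders:

```python
import pandas as pd

df = pd.read_csv("customers.csv")

# Frequency of values: exposes creative spellings of identical contents.
print(df["legal_form"].value_counts(dropna=False).head(20))

# Mean and spread of numeric fields: outliers jump out immediately.
print(df["birth_year"].describe())

# Simple outlier rule: more than 3 standard deviations from the mean.
z = (df["birth_year"] - df["birth_year"].mean()) / df["birth_year"].std()
print(df[z.abs() > 3])
```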

 

Our motto: show me your data and I'll tell you who you are and how well you are doing!

Data Quality Management (DQM): Address and data quality is a management or leadership task

Yes, this is a very important task for management. It is not only about IT systems. If employees do not know why and for what purpose they are doing it, or if there are no incentives to maintain data in the system, it will not work. Only if employees know this, and then exercise care, will your company be successful.

As already mentioned, the first requirement is that someone takes responsibility for the topic in the company: someone who acts as a leader and caretaker for this topic and constantly ensures that quality remains high.

On the other hand, lasting quality can only be controlled through KPIs, which are therefore also integrated into target agreements. In this way, the management and leadership team can see whether the company is heading in the right direction, whether quality is gradually improving and whether reporting is automatically improving as a result.

Explaining and motivating why data and address quality is so important is of course also part of it. Why should a sales force employee enter the data treasures stored in his head into a CRM? What does he get out of it? What do others get out of it? How does it help him manage his own work better? How can his work be made easier by automating tasks based on good data? Which tasks should be done together for an integrated data quality management system?

 

This topic of data quality management is also a management task because …

Address and data quality is not a cost factor but a value-added factor!

Data maintenance and data quality must become part of the corporate culture. It requires an entrepreneurial attitude from every employee: certain selected data is extremely important for the company, no ifs, ands or buts.

 

Data Quality: Key figures in the area of data and address quality

Key figures are, for example, "number of completely filled addresses", "date of last confirmation", "date of last correction", "number of mail returns", "number of addresses currently not advertisable", etc. All key figures are interesting when arranged by segment, since good customers are contacted rather frequently, so regular confirmation or correction takes place. For less good customers (low customer value), a different effort has to be made. (See also "Evaluation of addresses" in the subchapter "CRM Cockpit".)

We define a few simple, important KPIs or key figures to get you started.

  • Number of postal returns or mailing returns
  • Number of soft bounces and number of hard bounces
  • Number of parcel returns
  • For each important field there are the criteria:
    • "Fill level in %"
    • Content quality "good", "medium" or "bad" (a computation sketch follows after this list)
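The fill-level criterion is easy to compute. A pandas sketch that treats empty strings as unfilled; the field list and the quality bands (95%/80%) are invented thresholds:

```python
import pandas as pd

df = pd.read_csv("customers.csv")
IMPORTANT_FIELDS = ["salutation", "first_name", "last_name", "zip", "email"]

def fill_level(series: pd.Series) -> float:
    """Share of rows where the field is neither NaN nor an empty string."""
    filled = series.notna() & (series.astype(str).str.strip() != "")
    return filled.mean() * 100

for field in IMPORTANT_FIELDS:
    pct = fill_level(df[field])
    quality = "good" if pct >= 95 else "medium" if pct >= 80 else "bad"
    print(f"{field}: fill level {pct:.1f}% -> {quality}")
```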

Now you can extend or refine the KPIs for each country, region or target group. But first and foremost, the youngest prospects, the active customers and the recently passive customers (who have not ordered for some time) need to be cleaned up and kept at a high level.

The company can make these KPIs available to users for monitoring in reporting, in a Business Intelligence (BI) application.

Data Quality – Outlook:

Data is the new oil; you hear this sentence more and more often. But before the oil well can be tapped, high quality achieved or AI analyses performed, the basis, i.e. the database, must first be created. Many people talk about big data. Yes, the mass of data is increasing all the time, but big data is not the problem at the beginning.

The timeliness of the most important data is the challenge!

Only when the company dedicates itself to the many possible external enrichments and a lot of its own data (in the webshop, the website log files, social media, etc.) does big data come into play. But even here, one should not look at the (data) mountain as a whole and try to climb it all at once.

The company should focus on the data that is likely to bring the highest added value. "A lot helps a lot" has unfortunately always been, and still is, the worst advisor on the way to higher address quality.

Where the most important data is located is basically irrelevant, although the fewer sources the better. We know from the Blissfully study that there are unfortunately far too many cloud or SaaS applications; the data is widely scattered or hidden in silos, and hardly anyone has an overview.

Whether in CRM, ERP, e-commerce or on a data management platform (DMP), on-premise or in the cloud: the main thing is to have easy access and to be able to simply feed back the extracted and cleansed data. The data should be refined, and the employees should be involved in the refinement through targets.

The fashionable topic of digital transformation also only works if data quality is sustainably good. For the transformation you need reliability; you should avoid bad data and poor data quality and continuously optimize the most important data.

These tasks often take six to nine months, but they are worth it. After a short time, optimal data quality is achieved, and a ROI for this investment is guaranteed and quickly reached; we have proven this in every project so far. On top of that, you save budget on mailings, reporting and decisions become better, you avoid costs and you have significantly more chances of generating revenue.

Data and address quality is a critical success factor, and accuracy is a must, if only because of the GDPR (DSGVO). This is already the case today, and it will be even more so in the future. The GDPR alone obliges every company to keep its data absolutely up to date.

As already written, automation in marketing, sales and service, as well as AI analyses and forecasts, only work if the data is clean. And ultimately, management is responsible for assigning tasks, taking over operational tasks and controlling them.

 

Conclusion: Address quality is not a cost factor, but a value-added factor!

Further information, further literature, who else writes about this exciting topic besides us:

 

Note: This is a machine translation. It is neither 100% complete nor 100% correct. We can therefore not guarantee the result.

