Data Requirements Modeling
If logical and physical data design solutions were documented the way data
requirements are documented today, without the benefit of the graphical
representation of a data model, the cumbersome practices would make designs
harder and slower to produce and verify, and a lot more expensive overall.
Yet the data modeling of data requirements is rarely, if ever, done
independently of other types of data modeling nowadays. Folding it into
conceptual or logical modeling forces a solution to be designed at the same
time as, or even before, the problem space has been fully defined.
Systematically reconciling the data design with the data requirements, element
by element, is a manual exercise that is most often not done, and its absence
sends the physical data design through repeated cycles of readjustment.
Why aren't we building data models that graphically represent and specifically
address data requirements? Why can't we automatically reconcile a data model
representing the results of data requirements analysis with the LDM that
represents the proposed logical data design solution? Because the data
modeling tools we have today do not facilitate these activities.
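As a rough illustration of the kind of automated, element-by-element
reconciliation these questions call for, the sketch below compares a set of
requirement elements against the attributes of a proposed LDM. The names and
structures are hypothetical assumptions and are not drawn from any particular
modeling tool or project.

```python
# Hypothetical sketch: element-by-element reconciliation of data requirements
# against the attributes of a proposed logical data model (LDM).
# All element names below are illustrative assumptions, not from a real project.

requirement_elements = {
    "customer name",
    "customer birth date",
    "order total amount",
}

ldm_attributes = {
    "customer name",
    "order total amount",
    "order ship date",
}

# Requirements with no corresponding LDM attribute: gaps in the design.
unmet = requirement_elements - ldm_attributes

# LDM attributes with no stated requirement: design elements to justify or drop.
unjustified = ldm_attributes - requirement_elements

print("Requirements not covered by the LDM:", sorted(unmet))
print("LDM attributes without a stated requirement:", sorted(unjustified))
```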
François will share the evolution of his thinking on Data Requirements
Modeling (DRM), what he has realized, and what he proposes. He will describe
how he sees DRM being planned and carried out, and by whom; the changes
modeling tools must incorporate so their users can benefit fully from DRM
activities; the characteristics he sees in the central model object of DRM,
the Logical View; and how this model object differs from those found in
today's models.
François will also spend some time showing how, using specific workarounds,
one can create data models that look like DRMs in two different modeling
tools.
François Cartier has more than forty years of diversified experience in
Information Technology in a wide variety of commercial sectors, including
telecommunications, transportation, manufacturing, wholesale, government
agencies, insurance, and financial institutions. He has designed systems
marrying relational with object-oriented technologies, built and contributed
to corporate data models, and designed operational and decision support
databases under a variety of DBMSs. He has managed data analysis, system
development, application support, and IT change control teams. He has been
using various modeling tools for the last 25 years. He has given classes at
Golden Gate University and has made technical and management-level
presentations at various forums in the USA. He has been a DAMA SF chapter
member since 1985, is a past president, and has served as treasurer for the
last 12 years. He has been working for e-Modelers since 2002 on various
consulting and teaching assignments with clients.
Implementing a Data-Centric Strategy & Roadmap – Focus on What Really Matters
Data is the lifeblood of just about every organization and functional area
today. As businesses struggle to come to grips with the data tsunami, it is
even more critical to focus on data as an asset that directly supports
business imperatives, just as other organizational assets do. Organizations
across most industries attempt to address data opportunities (e.g., Big Data)
and data challenges (e.g., data quality) to enhance business unit performance.
Unfortunately, however, the results of these efforts frequently fall far below
expectations due to haphazard approaches. Overall, poor organizational data
management (ODM) capabilities are the root cause of many of these failures.
This workshop will cover three lessons, illustrated with examples, that will
help you establish realistic ODM plans and expectations and demonstrate the
value of such actions to both internal and external decision makers.
Among others, you'll walk away with three takeaways:
1. Organizational thinking must change: value-added data management practices
must be considered and included as a vital part of your business.
2. Walk before you run with data-focused initiatives: understand and implement
the necessary data management prerequisites as a foundation, then build upon
that foundation.
3. There are no silver bullets: tools alone are not the answer. Specifying
business requirements, business practices, and data governance is almost
always necessary.
Peter Aiken, Founding Director, Data Blueprint
Peter Aiken, Ph.D., is widely acclaimed as one of the top ten data
management authorities worldwide. As a practicing data consultant, author
and researcher, he has been actively practicing and studying data
management for more than 30 years. Throughout his career, he has held
leadership positions and consulted with more than 50 organizations in 20
countries across numerous industries, including defense, banking,
healthcare, telecommunications and manufacturing.
He is a highly sought-after keynote speaker and author of multiple
publications, including his latest book “Monetizing Data Management”.
Peter is the Founding Director of Data Blueprint, a data management
consulting firm that puts organizations on the right path to leverage data
for competitive advantage and operational efficiency.
He is also past President of the International Data Management Association.
Lewis Broome, CEO, Data Blueprint
An innovative and
practiced thought-leader in data management, Lewis Broome has more than 20
years of experience successfully designing, managing, implementing and
leading global data management and information technology solutions. His
successful track record is marked by strong leadership coupled with a
passion for driving data and technology solutions from a clear vision.
As an executive in the global financial industry,
Lewis led the development of globally integrated data solutions for two of
the largest banks in the world. He designed and delivered data solutions
(conceptual, logical and physical) and was able to drive standards and
deliver timely, cost-effective solutions that were aligned to business needs.
In his current role as CEO, Lewis, in partnership with Peter Aiken, Ph.D.,
has developed a tier-1 consulting organization that effectively combines
data management, management consulting and technology into a unique
professional services offering.
Hadoop Data Lake Controversy: Can You Have Your Lake and Use It Too?
Hadoop provides an ideal platform for storing many types of data that business
users (data engineers, data scientists, data analysts, and business analysts)
can leverage for data science and analytics. But Hadoop is a file system that
lacks the automation to catalog what data it contains, and it has no native
way for users to find and understand the data they need for their data science
and analytics projects. The lack of automation is overlooked when a team
conducts a pilot, since the data set is known; however, it becomes
debilitating as projects grow beyond a proof point or two. The end result is
data anarchy, where the business has to scavenge for data and hoard what it
can find, while IT desperately tries to manage the data to meet the needs of
the business.
Using data in Hadoop is like scavenging at a
flea market. It is impossible to know upfront what data is there and it
would take too much time to browse through the entire market. In the case
of Hadoop, it is not practical to browse through all the files in the
cluster to find the right ones to wrangle or visualize.
The opposite of shopping at a flea market is
Amazon.com. From a user perspective, it is easy to search and find the
right product very quickly. A user doesn’t need to write code or browse
through an endless list of items. Amazon.com provides a catalog of products
with detailed information that anyone can use.
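To make the catalog idea concrete, the sketch below builds a minimal
file-level inventory over a directory tree laid out like a data lake. The root
path, recorded fields, and logic are illustrative assumptions only; they do
not represent Hadoop tooling or any vendor's product, which would also profile
content, infer schemas, and track lineage.

```python
# Minimal sketch of a file-level data inventory for a lake-style directory tree.
# The root path and recorded fields are illustrative assumptions; a real data
# catalog would also profile file contents, infer schemas, and track lineage.
import csv
import os

DATA_ROOT = "/data/lake"  # hypothetical root of the data lake


def build_inventory(root):
    """Walk the tree and record basic metadata for every file found."""
    inventory = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            inventory.append({
                "path": path,
                "size_bytes": os.path.getsize(path),
                "extension": os.path.splitext(name)[1].lstrip("."),
            })
    return inventory


def save_catalog(inventory, out_file="catalog.csv"):
    """Persist the inventory so users can search it instead of browsing files."""
    with open(out_file, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["path", "size_bytes", "extension"])
        writer.writeheader()
        writer.writerows(inventory)


if __name__ == "__main__":
    save_catalog(build_inventory(DATA_ROOT))
```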
Waterline Data solves the challenges of
finding, understanding, and governing data in Hadoop. Waterline Data is
like Amazon.com for Hadoop data. Waterline helps anyone find and
understand data in Hadoop without writing code or wasting time browsing
through unintelligible files. In addition to providing the
self-service experience to find and understand the right data, Waterline
Data also automates building and maintaining a data inventory, securely
provisions data to users, and enables data governance throughout.
Alex Gorelik, Founder and CEO, Waterline Data
Alex created Waterline Data to accelerate the adoption of Big Data and
data-driven decision-making at enterprises.
Prior to Waterline Data, Alex served as general manager of Informatica’s
Data Quality Business Unit, driving marketing, product management and R&D.
Also for Informatica, Alex managed a team of 400 engineers and
product managers as SVP of R&D for Core Technology, developing
Informatica’s platform and data integration technology.
Alex joined Informatica from IBM, where he was an IBM Distinguished
Engineer for the Information Integration team. IBM acquired Alex's second
startup, Exeros, which specialized in enterprise data discovery.
Previously, Alex was co-founder, CTO and VP of Engineering at Acta
Technology (acquired by Business Objects and now marketed as SAP Business
Objects Data Services).
Prior to founding Acta, Alex managed development of Replication Server at
Sybase and worked on Sybase’s strategy for enterprise application
integration (EAI). Earlier, he developed the database kernel for Amdahl’s
Design Automation group.
Alex holds a B.S. in Computer Science from Columbia University School of
Engineering and an M.S. in Computer Science from Stanford University.