NEVER EVER talk back to the Flight Attendant October 9, 2014. Posted by Marko Sillanpää in Content Management.
I’ve made it a rule that, beyond a good morning or evening, I don’t talk to or make eye contact with a flight attendant when I’m flying coach, ESPECIALLY if they look to be in a bad mood. Last night’s flight home drove that point home.
Tags: #EMCElect, Consumerization, ECM, EMC, InfoArchive, sync, sync & share, Syncplicity
Sync and share addresses a simple proposition. Everyone wants to access all of their content from any device wherever they are.
Conceptually difficult problems need to be solved, but focusing on them often keeps us from dealing with basic needs. We too often “cheapen” those needs by assuming that everyone has already solved them or that there is no real business opportunity in them.
Complexity still exists in pervasive problems, but in this challenge it comes mostly from scale. Early on, as the sync and share market developed, it was surprising how difficult it was for some in the ECM ranks to accept that users (owners of both content AND budget) just wanted to access their files and did not see the need to add metadata, workflow and burdensome controls.
It took new innovative outsiders building companies that collected millions of users to convince them. Some still don’t get it. What is worse, some analysts still don’t get it either.
Now just because users don’t want controls doesn’t mean the businesses they work for do not need them. This is a natural progression but the difference is that the market reversed the flow of requirements and funding. In past decades, the controllers of technology drove requirements and consumers had no option but to accept them.
Consumerization of IT is what we call this change, but I have decided I no longer like the term. At its core, the phrase is a passive-aggressive slight to those now in control of the purse strings. Whether they admit it or not, old-school propeller heads look down on consumer technology as something “less refined.” You hear them use terms like “not robust,” “immature” and “inelegant.” What they are really saying is “I could have done this better myself.”
Which raises the question: so why didn’t you?
This elitism, despite the success of all things Apple, continues to slow innovation and adoption in the world of enterprise technology.
It is time to acknowledge that consumerization is not itself the origin of the shift but rather an outcome of it. The problems being addressed are so overwhelmingly pervasive that, for adoption to occur at all, the end point, rather than the data persistence layer and governance controls, determines a solution’s success.
For those of us in the business of creating these solutions, a focus on pervasiveness should trump complexity in our prioritization. Providing incremental improvement at scale creates exponentially more value simply because the solutions affect so many more people.
In a later post I will describe the pervasive problem I am currently focused on, the one EMC InfoArchive addresses. It is not at all a product of changing user experiences, yet it affects every IT organization on the planet.
Rebooting Enterprise Content Management July 8, 2014. Posted by Marko Sillanpää in ECM.
Tags: ECM, ECM 2.0
For the last year I’ve been a lurker in the ECM landscape. Mainly I’ve not been sure what to say. I’ve written a few posts and responded to a few others, but I haven’t felt ready to take a position. Recently I’ve had a few gentle pushes that have me thinking it’s time to say something, to talk about what has changed in my approach.
My tipping point was a visit to the Crown Partners website. People from Momentum’s past will remember their Documentum Viper and two Documentum Hummers. Today Documentum is nowhere to be found in their partner list. Crown has moved on, successfully, to become a web experience company. It leaves me to ask, is the ECM problem … changing?
Digital Archiving and Hemingway’s Hamburger Recipe June 30, 2014. Posted by Marko Sillanpää in Hyland, OnBase, Random Thoughts, Records Management.
Having been in this space for too many years to count, I’ve often been pleased to see altruistic ECM projects going on, something that, as Lee would put it, “isn’t putting toilet seats on the internet.” I really enjoy the work I can do with non-profits, and there’s an extra pep in your step when you know you’re not just increasing profits or cutting costs. Then there are the projects looking for technology to make data accessible to the public.
When Process Replaces Product (or Why I Hate My HoA) June 10, 2014. Posted by Lee Dallas in Content Management.
Tags: BPM, ECM, Process
I dislike my homeowners association.
I am not alone.
EMC World 2014 May 5, 2014. Posted by Lee Dallas in Documentum, ECM, EMC, xCP2.
Tags: #EMCElect, #MMTM14, Documentum, ECM, EMC, EMC World, Momentum 2014
I began my week at EMC World 2014 and Momentum today.
Even before the sessions I had a great conversation over breakfast with a colleague from another division about new things our respective technologies could do together. That is my favorite part of this conference. Free flow of ideas with wicked smart people.
First thoughts on the week ahead.
xCP 2.1: I have precious little time to attend sessions because of my day job, but I made time this morning to hear from xCP product management about the new features of 2.1 and a little peek into the future. I hope to write more on 2.1 later this week, but the key message is the maturity of xCP 2.1 into a terrific platform for creating beautiful, flexible and powerful business applications that go far beyond run-of-the-mill BPM or content management. Admittedly, there were many things in 2.0 that we all wish had been there. A few points I will make here:
- Process debugging and preview solve my biggest complaints.
- Seamless editing finally lets me build content management features on par with the rest of the portfolio.
- Session variables and page and type fragments allow me to be as creative as I want to be in building an easy-to-use and easy-to-understand user experience.
InfoArchive: For customers and partners at EMC World, I suggest you spend some time understanding InfoArchive. As I have written many times before, longtime Documentum and IIG practitioners need to make sure they do not carry too much conceptual baggage with them as they look at this opportunity.
We have been in the business of archiving data for many years, but InfoArchive allows us to expand the value of our expertise and deliver a purpose-built structured and unstructured archiving platform. Even though InfoArchive has a suite of components that may not all be necessary, the evolution of the business model to consumption-based pricing simplifies the transaction as well.
Just as you don’t want to confuse complex ECM use cases with InfoArchive, the other risk is to oversimplify and think this is merely an object storage play. Unfortunately, as an industry we overload terms and lose very important distinctions. InfoArchive is a layer above the storage conversation and delivers a business-user application layer to search and manage archived records.
Tonight I start my job on the show floor, which is the main reason for being here: getting a chance to spend time with customers and partners, talking about their challenges and ideas for how we can help meet them. If you are here in Vegas, please stop by the booth and see all the latest and greatest from IIG this year at EMC World.
Standard disclaimer and friendly reminder: I am an employee of EMC working in IIG Partner Alliances. Everything written here is my own opinion.
Deliberate Disruption April 17, 2014. Posted by Lee Dallas in Content Management, ECM, Technology.
Tags: disruption, ECM, innovation
Last weekend Box CEO Aaron Levie tweeted:
“Disruption is the art of identifying which parts of the past are no longer relevant to the future, and exploiting that delta at all costs.”
To which I responded:
“Deliberate disruption is extraordinarily rare. So rare that I think you can capitalize on it but never plan for it.”
I’ve been asking myself this question ever since: can disruption ever really be deliberate?
The question conjures up images of team meetings where Dilbert’s pointy-haired boss declares, “Nobody leaves this room until we innovate!”
It occurs to me that in order to be deliberate, disruption must be an objective, not just an outcome. And that is a mistake.
Truly disruptive technology sets out to solve a problem first. Disruption is a possible but not guaranteed outcome of innovation introduced into a landscape. That landscape is made up of evolving technology, existing competition, and fluid user expectations all of which can be exploited or encountered as obstacles depending on conditions.
The disruption is a function of the problems the competition faces in response to what you are doing. Competitors can easily fall into the trap of thinking that imitating the turmoil with their existing portfolios is the same thing as finding new ways to solve problems. It isn’t.
Genuine disruption solves new problems in a landscape, solves old problems in new ways and/or significantly alters cost, value and accessibility to those solutions. It is in the areas of cost and accessibility where we have the ability to interrogate the landscape and potentially predict the degree of disruption introduced. This is that part that can be deliberate.
In recent years, cloud and mobile have provided a method for disruption by making it possible to migrate traditionally on-prem problem sets off-prem, resetting cost models for both producers and consumers of services and forcing the redefinition of well-established roles and funding models.
When those services are not re-imagined in the cloud context and are simply ported from old delivery models, there is plenty of turmoil for vendors, but they are most likely victims, not instigators, of the disruption.
The challenge for buyers and vendors alike is to understand what combination of forces makes up the disruptors and which companies or products are merely caught in the vortex. It is not always easy to tell. Even new businesses can get caught up in a pattern of generating turmoil and lose sight of the real objective.
Solve problems for real people.
That is what we must do at all cost – and occasionally it will be disruptive.
Tom Rouse on “ECM and Cold Pizza In the Fridge” February 25, 2014. Posted by Marko Sillanpää in Documentum, Content Management, ECM.
Tags: ECM, Documentum
It’s crazy how busy we get sometimes, so it’s fun to catch up with people you haven’t spoken with for a while. When I heard from Tom Rouse, I took the time to catch up. I’ve known Tom for over 15 years; we were both consultants at Documentum at the time. He’s one of those people who enjoys thinking about ECM and making it relatable. When I heard his cold pizza analogy for documents, I thought it was something worth sharing, so I asked if he would write it down.
Here’s what Tom said:
Cold pizza is generally a staple of every refrigerator (fridge). It gets tossed in and sometimes forgotten. But there it is, still tasty and waiting to fulfill its purpose as a snack. Most Enterprise Content Management (ECM) systems today are serving as “refrigerators” for the documents and content in them.
Please stay with me on the metaphor. Most clients use the carefully constructed models and user interfaces only when they need a “snack.” The number one reason most clients use an ECM system is to seek out information when they are hungry for it. They just go looking in the fridge for a slice of content!
ECM Trends 2014 – We Don’t Need Big Content January 19, 2014. Posted by Lee Dallas in cloud, Content Management, ECM, Technology.
Tags: 2014 Predictions, Content Management, ECM
A friendly reminder that all of the opinions expressed here are completely my own and not those of my employer.
I am late this year putting together my thoughts around trends for 2014 in ECM. To be frank, many of the trends in ECM seem obvious, with much already having been written about them.
- Everyone is moving to the cloud, and this is no longer trend-worthy news.
- There will be a few acquisitions, especially among the mid-tier players to round out capture, workflow and mobile capabilities.
- IPOs of a few key players will be frequently discussed but deferred until 2015.
- DropBox will relaunch its business offering AGAIN. Look for them to make acquisitions of overlapping tools to gain this foothold.
I struggled to find something more substantive to cover until I found myself in a lively discussion on the topic of “Big Content.”
Commercialization of value extraction from the enormous amounts of unstructured data being generated today is the next major focus for advancement in the ECM industry: going beyond improving transactional throughput and accuracy and into understanding. It may be ironic, since I am one of the “Big Men On Content,” but I do not like the term “Big Content.” The term perpetuates outdated stereotypes and division at a time when technologies should be coming together.
To understand what is meant by the term, I did what anyone else would do: I went to bigcontent.com. Surely the genius who had the foresight to grab the URL could tell me whether it really is separate from big data. I was disappointed. A marketing firm jumped on the term and is using it in a completely different context. Good for them, but it is perhaps a missed opportunity from an ECM perspective.
Big Content, as it relates to ECM, seems to be the content management industry’s attempt to ride the coattails of Big Data marketing: a never-ending quest to be appreciated as much as the more popular sibling, structured data.
So what do we mean by Big Content? Is it a subset, a superset or something altogether different from what we are now calling Big Data? EMC’s Dave Dietrich wrote this piece on Big Data misconceptions and takes the position that unstructured data that rises to Big dimensions is a subset.
To be “big” in this context, Dietrich argues, the data in question must have great volume, yes, but must also have both variety and velocity. Certainly some unstructured data has these characteristics. He goes so far as to say that most Big Data problems are grounded in the unstructured, citing last year’s IDC Digital Universe study.
Gartner’s Darin Stewart seems to agree that Big Content is a subset of big data. He goes on to posit that the slow uptake of interest in the unstructured aspect stems from a lack of “comfort” on the part of IT in dealing with documents as opposed to databases. He touches on what I feel is the crux of the issue, but I don’t think it has anything at all to do with comfort itself. I think it is the utter lack of an integrated tooling approach across the industry.
All silos begin as words.
If you make it a separate category, you may one day have tools that let you do meaningful things, but without a common analytical approach it will perpetuate the integration burden that the structured and unstructured worlds deal with today. Deriving value from structured data will always be easier, and as long as the solutions remain separate, structured data applications will continue to hold the attention of buyers.
The very things long-term ECM proponents hope to achieve by trumpeting Big Content will continue the technological isolation and lack of innovation that the industry has struggled with for a decade. What is needed is a coordinated drive to raise the expectations of the emerging structured analytical tools to demand search, content analytics, sentiment analysis and more. Some offerings, particularly those tailored to social media analytics, have begun this work, but we must continue to push beyond 140 characters to more valuable and information-rich content.
We do not need a separate category for Big Content tooling, marketing and expertise. We already have one. It is called Big Data. And it is very nice.
Investment in this is happening whether you call it Big Content or not. IBM’s billion-dollar investment in Watson is the best example. From a user’s perspective, Watson does not distinguish between structured and unstructured sources when presented with a question. Likewise, as we begin to think about other analytical frameworks from a user’s point of view, the difference in the structure of the data sources should matter less over time and eventually disappear altogether.
One might ask, isn’t this just the same content and semantic analytics that we have been talking about for years? The answer is “sort of.” You will be hard-pressed to find any of the lofty promises of those initiatives fulfilled. It is my contention that this tooling needs to scale and be formally folded into the analytical tool set of big data. The correlated structured data provides context for the extracted unstructured content. At some level this is happening, but as an industry I think we derail this motion when we attempt to create differentiation in categories.
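The idea that correlated structured data gives context to a signal extracted from unstructured content can be sketched in a few lines of plain Python. The record shapes and the naive keyword-based sentiment score below are illustrative assumptions for the sake of the example, not any vendor’s API or algorithm:

```python
# Structured records: support tickets with business metadata (hypothetical data).
tickets = [
    {"id": 1, "product": "WidgetPro", "region": "EMEA"},
    {"id": 2, "product": "WidgetPro", "region": "APAC"},
]

# Unstructured content keyed to the same ids.
notes = {
    1: "Customer is frustrated; the upgrade failed twice.",
    2: "Smooth rollout, customer is happy with the new release.",
}

# A deliberately naive stand-in for real content analytics.
NEGATIVE = {"frustrated", "failed", "broken"}
POSITIVE = {"happy", "smooth", "great"}

def sentiment(text: str) -> int:
    """Keyword score: count of positive words minus count of negative words."""
    words = {w.strip(".,;").lower() for w in text.split()}
    return len(words & POSITIVE) - len(words & NEGATIVE)

# The join is the point: structured fields (product, region) give business
# context to the signal extracted from the unstructured text.
enriched = [{**t, "sentiment": sentiment(notes[t["id"]])} for t in tickets]

for row in enriched:
    print(row["product"], row["region"], row["sentiment"])
```

Trivial as it is, the sketch shows why a shared tool set matters: once the extracted signal lands in the same row as the structured attributes, any ordinary analytical tooling can slice it by product or region without caring where the data came from.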
This convergence of structured and unstructured analysis will be hampered if we spend inordinate mental and marketing energy today perpetuating a separation of the disciplines simply to defend the value of our current expertise.
The trend has begun. We need to help it along or get out of the way.