Blog for Knowledge Sharing
A Great tool for Knowledge Sharing ...The WWW is no longer READ-ONLY!
Sunday, May 30, 2004
Boosting staff performance with coaching - May 31, 2004
Coaching is not a new practice, she adds. 'Managers have been informally coaching their people forever. They just did not call it coaching.'
Saturday, May 29, 2004
Computer Technology Review: Designing a knowledge discovery system, Part 2: now that we have categorized, let's classify! - Internet
Really, all people want to do when they use a search engine, portal, or even a full-blown knowledge management system, is answer a question. They want all information relevant to their question so they can formulate an answer. To do this, knowledge management systems must shift from a "retrieval-" to a "discovery-"based orientation. Next generation knowledge discovery systems will introduce users to a new way of searching information assets by better complementing the user's own cognitive approaches to finding information. These systems must simultaneously manage vast and continually changing stores of information, as well as the idiosyncratic nature of the user.
To handle this two-step requirement, knowledge management system design should be split into two consecutive phases. The first phase should focus on the organization and its need for a maintainable, reliable and universally understandable information repository. The focus in phase one is internal. This phase depends upon the proper use of ontologies and taxonomies, as described in Part One ("A Roadmap to Proper Taxonomy Design," Computer Technology Review, July 2003).
The second design phase (and the focus of this article) is the user-centric, externally focused dynamic classification phase--which, when layered on top of a solid taxonomically based informational foundation, constitutes a powerful, scalable and flexible system capable of complex problem-solving support. The beauty of dynamic classification is this flexibility and ability to adjust in the face of the huge and constantly changing information assets available today.
What's required is a set of tools that help individuals extract small details or serendipitously discover data relationships within the information foundation, in ways that make unique sense to them personally.
What is a Classification?
A classification can be visualized as a tree representation of what is actually a multi-dimensional matrix. "Dynamic" classification is the ability to cross and combine these dimensions--essentially slicing and dicing information as desired and in real-time--to place information into the perspective most meaningful to the user within a unique, time-specific problem-solving context.
It is this specific ability, the user-definable slicing and dicing of data, which supports knowledge discovery versus merely information retrieval.
These information dimensions, or trees, may be shifted or reversed, dramatically affecting the resulting classification even though the information latched to the categories within a dimension remains unchanged. It is easy to see that these trees are permutable. In the two classifications in Figure 1, the dimensions used are the same. However, when the dimension order shifts, an entirely different perspective is generated.
In Classification 1, we can see diseases within an African context. In Classification 2, we can see the epidemiology of Alzheimer's disease across different geographical contexts.
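This permutability can be sketched in a few lines of Python. The documents, tags, and IDs below are hypothetical stand-ins for an indexed corpus; the point is only that reordering dimensions regroups the same tags into a different tree:

```python
from collections import defaultdict

# Hypothetical mini-corpus: each document carries one tag per dimension.
docs = [
    {"id": 1, "geography": "Africa",  "disease": "Alzheimer"},
    {"id": 2, "geography": "Africa",  "disease": "Anthrax"},
    {"id": 3, "geography": "America", "disease": "Alzheimer"},
]

def classify(docs, first, second):
    """Group documents by two dimensions in the given order."""
    tree = defaultdict(lambda: defaultdict(list))
    for d in docs:
        tree[d[first]][d[second]].append(d["id"])
    return {k: dict(v) for k, v in tree.items()}

# Classification 1: geography first, then disease.
by_geo = classify(docs, "geography", "disease")
# Classification 2: the same tags, dimension order reversed.
by_disease = classify(docs, "disease", "geography")
```

The tags never change; only the grouping order does, which is exactly the "slicing and dicing" described above.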
The ability to shift dimension order is a true benefit of dynamic classification. Individuals need to understand many variables in order to make a good decision--particularly in an urgent situation. Moreover, each individual will go about this process in a different way. For example, if a terrorist attack was imminent the local police, FBI and medical personnel would all want essentially the same information but from their own different perspectives.
Dynamic classifications generally occur in identifiable patterns. These are geography/topic, horizontal/vertical and vertical/vertical. This is useful to keep in mind as you begin to design your dynamic classification tools and identify the dimensions you will offer your users. Geography is the most commonly used dimension because it is an analytical element of so many decision-making processes. Terrorism in the Philippines, criminal law in Texas, or domestic sales, for example, would all involve a geographic tree. An example of the horizontal/vertical pattern would be the petroleum business or anti-money laundering regulations. The horizontal tree (business) includes broad categories such as marketing, research, health and safety, etc. By crossing a horizontal category with a vertical dimension such as Petroleum, which may contain such categories as crude oil or solid waste, you would derive categories populated by documents highly specific to that type of business. The other major structure, vertical/vertical, really displays the power of dynamic classification. An example would be "MeSH Proteins" and "MeSH Diseases." In this case you would see all categories containing documents with information matching these two trees.
You can see how quickly this process becomes complex. There are thousands of diseases, and if you cross them with the dimension of proteins alone, you could have millions of possible combinations. If you add a third dimension, chemical compounds, you move quickly into the billions. The virtual space containing your multi-dimensional operation is huge, even when you use only a small part of it.
This highlights another design consideration. If you allow your users to cross too many large dimensions, the number of relevant documents returned is likely to be low and rather unsatisfying for the user. While this sounds counter-intuitive, it happens because very few documents will colocate a reference to disease A, protein B and chemical compound C. So, while the virtual space required is huge, it will be populated by a correspondingly small number of documents. You need to strike a balance between the number of dimensions you use and the number of documents you can actually target to populate the resulting classification. To do this successfully you need to understand your users. You will need to determine what type of information is most useful, which intersections of information will be most valuable, and then ensure that these individual dimensions are represented in your indexing and taxonomy development. For example, expert users are likely to want extremely specific information throughout their classification. General users will want to see a broader selection of data at the top and then drill down as knowledge increases. In some cases, you may need to design two systems to address each user group's needs.
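A back-of-the-envelope sketch makes the sparsity argument concrete. The dimension sizes below are illustrative figures, not real counts; the tiny simulated corpus shows how few cells of the virtual space actually get populated:

```python
# Illustrative dimension sizes (assumptions, not real thesaurus counts).
diseases, proteins, compounds = 5_000, 2_000, 10_000

two_way = diseases * proteins        # cells in a two-dimension cross
three_way = two_way * compounds      # adding a third dimension

# A tiny simulated corpus: each entry is the (disease, protein) pair a
# document actually co-references. Only these cells become populated.
corpus = [("d1", "p7"), ("d1", "p7"), ("d2", "p9")]
populated = len(set(corpus))

sparsity = populated / two_way       # a vanishingly small fraction
```

Even this toy cross yields ten million cells; adding the third dimension pushes the space to a hundred billion, while the populated fraction shrinks accordingly.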
Populating Classifications
Once a number of template classifications have been designed, the next step is to benchmark the classifications and analyze how they become populated. Visualize the population process as a waterfall: imagine documents dropped at the top of the cliff and cascading down into various streams. Documents flow through the classification design and go as deep as they can to find their "best" folder(s). A document can pass through a node and explore further if it satisfies the rules established for that folder. At a minimum the document's tags must match the folder's name. A rule can, of course, be more sophisticated and combine taxonomic categories with semi-structured information otherwise extracted from the documents. Metadata included in forms, for example, or clinical trial reports, may be an extremely important information source. As well, an individual's personal database of notes, thoughts and clipped articles may be significant to their particular line of research. These little "treasure houses" must also be indexed and accessible through the folder's rules.
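The waterfall can be sketched as a recursive descent. The folder rule here is the minimal one named above (the folder's name must appear among the document's tags); the tree and tags are hypothetical:

```python
# A sketch of the "waterfall" population process. Each folder has a
# rule (minimally: the folder name must appear among the document's
# tags) and optional child folders. Names are hypothetical.
def place(doc_tags, folder, path=()):
    """Return the deepest folder path(s) the document can reach."""
    if folder["name"] not in doc_tags:
        return []                     # rule fails: the doc stops here
    path = path + (folder["name"],)
    deeper = [p for child in folder.get("children", [])
              for p in place(doc_tags, child, path)]
    return deeper or [path]           # deepest reachable folder(s)

tree = {"name": "Business",
        "children": [{"name": "Marketing"},
                     {"name": "Research",
                      "children": [{"name": "Petroleum"}]}]}

paths = place({"Business", "Research", "Petroleum"}, tree)
```

A real rule would combine taxonomic categories with extracted metadata, but the cascading shape of the computation stays the same.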
In Figure 2, the fact that a given document has all the tags required by the path (date::geography::business:: type) is not enough. To ensure maximum quality, you also want to make sure that there is enough "mutual information" between the occurrences of these tags--in short, that these tags are consistently found together in the document. For example, if you were looking for documents referencing red SUVs, you would not want to see a document dealing with blood or pigmentation. You would only want to see documents in which the words "red" and "SUVs" occurred in close proximity, indicating a relationship. The ability to latch concepts and accurately identify them as having additional meaning based on their proximity is critical to automating classification.
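A toy version of that proximity requirement, in the spirit of the "red SUVs" example. The word-window size is an assumption; real systems use more sophisticated mutual-information measures:

```python
# Two tags count as related only when they occur within a small word
# window of each other; window=3 is an illustrative choice.
def cooccur_nearby(text, a, b, window=3):
    words = text.lower().split()
    pos_a = [i for i, w in enumerate(words) if w == a]
    pos_b = [i for i, w in enumerate(words) if w == b]
    return any(abs(i - j) <= window for i in pos_a for j in pos_b)

related = cooccur_nearby("he drove a red suv to work", "red", "suv")
unrelated = cooccur_nearby("red blood cells thrive; the suv market grew",
                           "red", "suv")
```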
Once a template classification has been populated, you should check the "behavior" of the classification against your collection of documents to verify results in terms of both accuracy and efficiency. The following are a few tips for verifying the accuracy of your classifications.
Quality Controls
Folder Population
Folders should be scanned for population quantity. Some folders will be overpopulated and some will be quite thin. Clearly the documents should flow as deeply as possible into the classification. If the folders appear overpopulated, check for a bottleneck. If this occurs, the folders will need a bit more "room" so documents can flow appropriately into "children" folders. Use your taxonomy to break the folder into more categories.
On the other hand, you may have folders containing only one document. That's not good either. In these cases you may have one folder containing one document that you open only to see another subfolder containing one document, and so on. To eliminate these "strings," you should take the "end" folder and collapse all the intermediate levels leading to it, so that the folder containing the end document appears as a sub-category of the first level. If a large area of your classification is unpopulated, you may need to release some constraints. Perhaps the combination of dimensions is too rigid, or you don't have documents dealing with the right combination of information, or you should incorporate a different type of source material.
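A population scan of this kind can be sketched in a few lines. The thresholds below are illustrative assumptions; real bounds depend on corpus size and user expectations:

```python
# Flag folders whose population falls outside hypothetical bounds.
def scan(folder_counts, high=500):
    report = {}
    for name, count in folder_counts.items():
        if count > high:
            report[name] = "overpopulated: split with more taxonomy categories"
        elif count <= 1:
            report[name] = "thin: collapse intermediate levels or relax rules"
    return report

report = scan({"Diseases": 900, "Anthrax": 1, "Alzheimer": 120})
```

Here "Diseases" would be broken into children using the taxonomy, "Anthrax" flagged as a string to collapse, and "Alzheimer" left alone.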
Interrupted Bell Curve
In a proper classification you would expect to see the natural distribution of your documents represented as a bell curve. You would see a few folders at the top, more in the middle and fewer again at the end--let's say the fourth and fifth levels of classification. If this is not the case, and you see larger document quantities at the end levels, then you need to add additional levels to increase specificity and derive a more natural bell curve.
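Checking for that shape amounts to counting populated folders per level. The paths below are hypothetical; a healthy design rises toward the middle levels and falls off again:

```python
from collections import Counter

# Hypothetical populated folder paths, one tuple per folder.
paths = [("A",),
         ("A", "B"), ("A", "C"),
         ("A", "B", "D"), ("A", "B", "E"), ("A", "C", "F"),
         ("A", "B", "D", "G")]

per_level = Counter(len(p) for p in paths)   # folders at each depth
peak_level = max(per_level, key=per_level.get)
```

A peak at the first or last level, rather than the middle, is the "interrupted" curve that signals missing levels.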
Folder Size
The average size of your folders should be a reasonable number: not a single document, and not many hundreds, but somewhere in between.
Quality Tests
Needle Test
One quick way to verify the quality of your classification is to find a bit of relevant information that occurs only once among all of your documents. Check to make sure that you can actually find that one, unique combination of data.
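The needle test reduces to a one-line check once the classification is built. The paths and document IDs below are hypothetical:

```python
# Plant one unique fact in the corpus and verify the classification
# surfaces exactly that document at the expected path.
classified = {
    ("Africa", "Anthrax"): ["doc12", "doc48"],
    ("Africa", "Alzheimer"): ["doc07"],   # the unique "needle"
}

def needle_test(classified, path, expected_doc):
    return classified.get(path, []) == [expected_doc]

found = needle_test(classified, ("Africa", "Alzheimer"), "doc07")
```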
Full Discovery Test
Another quick test is to open each of the documents in a particular category. Verify that the information cited is not too far apart to represent a relationship. Also check to make sure that the information is correctly categorized.
While leaps made recently in search technology are astounding, the mind is still the better tool. Our innate ability to balance multiple variables and shift variable priorities when emergencies arise is a skill we should complement, not duplicate, with computing power. Next generation knowledge discovery systems are beginning to exploit the power of the user, through capabilities like dynamic classification. By layering dynamic classification on any properly designed, taxonomically aligned information repository, corporations can immediately empower their knowledge workers to begin working more precisely, efficiently and with significantly more satisfying results.
Figure 1--Geography and diseases
Classification 1
Africa
    Alzheimer
    Anthrax
Classification 2
Alzheimer
    Africa
    America
Figure 2
Example:
Summer (metadata: date)
Domestic Sales (taxonomic: geography and business)
Convertibles (metadata: type)
www.in.convera.com
Dr. Claude Vogel is chief technology officer at Convera (Vienna, VA)
Computer Technology Review: A roadmap for proper taxonomy design: Part 1 of 2 - Internet
Burgeoning information quantities, regulatory compliance requirements and competitive drivers for speedier, more accurate data analysis continue to spur the development of many new, innovative information management technologies. Yet, pivotal to the success of these technologies is a rather old technology--arguably defined by Aristotle and later refined by Linnaeus--called a taxonomy.
A taxonomy is a hierarchical system describing the descending relationships between species and genera. Species derive from a common genus and, within a taxonomy, are hierarchically represented according to their essential characteristics and differences. For example, a thoroughbred is a type of horse, which is an equid, which is a mammal and so on. Another useful term to know when defining a taxonomy is ontology. An ontology is a foundation of categories representing a particular organization's view of its world. It also reflects the organization's commonly used and trusted breakdown of those categories. For example, the logical breakdown that a news broadcast organization might use for its news items: World, Sports, Politics, etc., is ontological.
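The thoroughbred example can be rendered as a minimal parent-pointer taxonomy, a sketch only; real taxonomies carry many children per genus:

```python
# Each species points to its genus; walking the chain recovers the
# descending relationships the text describes.
is_a = {"thoroughbred": "horse", "horse": "equid",
        "equid": "mammal", "mammal": "animal"}

def lineage(term):
    """Walk the genus chain from a species up to the root."""
    chain = [term]
    while chain[-1] in is_a:
        chain.append(is_a[chain[-1]])
    return chain
```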
Taxonomies, used in conjunction with company ontologies, have proven to be a highly efficient structure for organizing structured and unstructured content. Taxonomies are therefore highly desirable, indeed a critical component of any sizable information management structure, be it a content management system, Intranet or portal.
Properly used and designed, taxonomies provide a consistent, scalable, stable means of organizing even vast quantities of data. They provide a navigable foundation enabling the logical, intuitive access of data. Improperly designed, taxonomies become a maintenance nightmare. The following is a conceptual explanation of the tenets of good taxonomy design as the basis for an information or knowledge management system.
Taxonomy development and (document) content indexing processes should be completed as the first phase of any information management or knowledge management design. Here's why.
Taxonomies are collective tools that reduce complexity by suggesting a logical, ontological, i.e. culturally founded, hierarchical representation of categories. Indexing represents the process of applying these ontologies to any particular document's content in order to normalize its content. By passing through the indexing process, the document is consistently aligned against a taxonomic standard.
If, on the other hand, you organize data based on the experience or perspective of a given set of people--even subject matter experts--you will need to rebuild your organizational structure each time something changes, for example when a "new" expert is added, or a different or additional set of terms needs to be considered, etc. Indexing on the basis of a group of individual perspectives effectively shatters any attempt at establishing corporate consistency into a multitude of insupportable idiosyncrasies.
The taxonomy development and indexing phase therefore is intentionally user independent and driven entirely by content. Its goal is the creation of a stable, foundational structure that supports a corporation's need to properly and consistently index its corpus of data.
It is not until the second phase of knowledge management system design, the dynamic classification phase, that you tap into individual expertise. This phase of development supports the real-time classification of information--from the user's perspective--providing tools that enable users to "slice and dice" data in the way that makes the most sense to them, given their unique perspective and the problem they are trying to solve at the time. The success of an individual's ability to classify data in this way, however, depends upon the proper and successful completion of the taxonomy design and content indexing phase.
This article will focus on indexing and taxonomy design but will touch on the issues that companies face when they blur or reverse these two development phases, which happens surprisingly frequently.
Where to Begin
First, you will need to identify the paradigms of information your customers/employees are interested in accessing. From this you can construct an ontology--reflecting the unique way your corporation chooses to view these paradigms or groupings of data. Next, you should find and incorporate thesauri that best describe this ontology. The thesauri will provide a controlled, agreed-upon vocabulary for that information, generally standard to an industry, including source terms, related terms and synonyms. For example, a pharmaceutical company may wish to use MeSH (Medical Subject Headings), while a defense company might want to use the DTIC (Defense Technical Information Center) thesaurus. These thesauri, layered upon ontologies, form the basis of your taxonomies. So what does it take to create a good taxonomy?
Tips for Good Taxonomy Design
Generally, it's better to develop multiple taxonomies each focused on a particular sphere of interest, versus creating a single, multi-purpose taxonomy. Good taxonomies will also contain certain characteristics described as follows.
Depth: Taxonomies should include no more than seven descending levels; five is closer to optimal. Figure 1 shows a taxonomy design canon including the characteristics of each level.
[FIGURE 1 OMITTED]
The dotted line is where the "ontology" created by the company meets reality. Generic categories and terms found above the middle of the schema represent the level at which we start to describe the real world with our mental and linguistic tools; meaning we can touch a gun or a sign, whereas it would be hard to touch "ordnance." There are typically two ontological levels above the dotted line. These are group nouns, collective terms used to "glue" the descending concepts together. The items at and below the dotted line become increasingly specific, as does our ability to describe the world at a more granular level. This way of deriving and compounding words to extend our grasp of the world is consistent in any language.
Width: Some taxonomies are up-heavy including too many higher levels or levels of equal definition. Synonyms start appearing as levels, creating ambiguities in logic and therefore instability as shown in Figure 2.
[FIGURE 2 OMITTED]
The problem with the levels in this taxonomy should be apparent. What's the universally understood difference between unwelcome and unpleasant? Why should one term be on top of the other? Wouldn't the logic work perfectly well in reverse? This "taxonomy" is really a classification--meaning it probably reflects the view of an individual but does not indicate a genus-to-species sequence of relationships. In a taxonomy, this collection of synonyms should be collapsed into a single level.
Balance: Another problem can occur when an ontology is not used as a starting point or as an overlay for a taxonomy. In Figure 3, the root words are too numerous, creating an overly flat structure. This company has used a thesaurus but not an ontology. You need both. In a proper taxonomy you would expect to find perhaps a single root, such as Accounting with second level categories such as accountants, accounting firms, each with their subsequent levels and so on. You would not see non-related root words, such as acceptance and accountability, within this taxonomy.
Another characteristic to watch out for is unbalanced structure. This is shown in Figure 4 where you have one category (Acceptance) indicating one path and another category (Accidents) providing hundreds of paths.
Single Path Progression:
Consider the following real-life example of the confusion created when information is organized based on the experience of experts, versus from a taxonomical structure (Figure 5). This example was taken from a "taxonomy" used by a tax analyst firm. You see that assets and liabilities are duplicated in two different paths. Also note that if you reverse the structure, putting individuals and corporations under assets or assets and liabilities under individuals, the same logic applies.
[FIGURE 5 OMITTED]
This doesn't pass the first test of a proper taxonomy. An individual is not an asset, nor is a liability a corporation. These relationships don't reflect a proper genus to species relationship. Whenever a path is duplicated or if a path can be correctly reversed using the same logic, the structure is not taxonomical. Remember, a classification is a user-definable way of slicing and dicing data; a taxonomy is a uniform, non-changing structure for organizing data. The view above is a classification. It is a reflection of the way tax experts may wish to view information given their experience.
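The duplication test can be automated with a simple check: any child label hanging under more than one parent signals a classification rather than a taxonomy. The paths below mirror the tax-analyst example:

```python
# Parent/child pairs taken from the Figure 5 discussion.
paths = [("Individuals", "Assets"), ("Individuals", "Liabilities"),
         ("Corporations", "Assets"), ("Corporations", "Liabilities")]

# Map each child label to the set of parents it appears under.
parents = {}
for parent, child in paths:
    parents.setdefault(child, set()).add(parent)

# Labels with more than one parent fail the taxonomy test.
duplicated = {child for child, ps in parents.items() if len(ps) > 1}
```

"Assets" and "Liabilities" each appear under two parents, which is exactly the duplicated-path symptom described above.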
Figure 6 shows the proper taxonomical structure for this same information. These two dimensions should actually constitute two separate taxonomies.
[FIGURE 6 OMITTED]
In this structure, the information is equally accessible, but it is organized in a non-changing and universally understood hierarchy.
Once your taxonomies have passed these simple, at-a-glance structure tests, you should further test them by running them against your information corpus. This next step is a further test of the quality of your taxonomies, showing how well they perform against real data, and giving you another opportunity to adjust and refine their structure before you move to the next step in knowledge management design.
Test Your Taxonomies
Before framing your documents against a taxonomy--essentially an indexing process--keywords, concepts and entities must be recognized in order to provide an accurate and precise understanding of the content of each document. In the best systems, a sophisticated linguistic analysis process is employed to correctly identify the phrases, concepts, entities, etc. (collectively referred to as tokens) found within documents. This linguistic analysis process generates a first level of normalization (stemming) on textual content. For example, if your organization chooses to use the word "gun" to represent everything from a pistol to a rifle, then all documents referencing these sorts of firearms will appear under the category of "gun."
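The "gun" example reduces to a lookup against a controlled vocabulary. The synonym map below is a hypothetical stand-in for a real thesaurus:

```python
# Fold variant terms into one controlled-vocabulary category.
SYNONYMS = {"pistol": "gun", "rifle": "gun", "firearm": "gun",
            "guns": "gun", "rifles": "gun"}

def normalize(tokens):
    """Map each token to its controlled term, lowercasing as we go."""
    return [SYNONYMS.get(t.lower(), t.lower()) for t in tokens]

tags = normalize(["Rifle", "pistol", "safety"])
```

Real linguistic analysis also handles stemming and multi-word phrases, but the normalization step has this basic shape.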
Once the tokens are extracted, they can be run--or latched--against one or multiple taxonomies, resulting in a very rich index of the contents of your documents. This indexing process creates a semantic signature for each document. The resulting semantic signature ranks all the taxonomic categories, which have been linked to each document token, along with additional information including the location of these tokens in the document, etc. The semantic signature--or collection of metadata--within each document is then retrievable as an XML representation of the contents of each document.
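A semantic signature rendered as XML might look like the sketch below. The element and attribute names are invented for illustration; real systems define their own schema:

```python
import xml.etree.ElementTree as ET

def signature_xml(doc_id, latches):
    """latches: iterable of (category, score, token_position) triples."""
    root = ET.Element("signature", doc=doc_id)
    for category, score, pos in latches:
        ET.SubElement(root, "latch", category=category,
                      score=str(score), position=str(pos))
    return ET.tostring(root, encoding="unicode")

sig = signature_xml("doc07", [("Diseases/Anthrax", 0.91, 12),
                              ("Geography/Africa", 0.84, 3)])
```

Each latch records a ranked taxonomic category plus the token location, matching the description of the signature's contents.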
As a result of this process, you can view the "latching scores" of your documents, another indicator of the health of your design. You'll see concentrations of latches in certain categories, which may be overly deep or wide, or clearly duplicated. Taxonomically based indexing technology will produce tables with additional metrics. These metrics will show which categories are overproducing tags and which aren't producing enough. Figure 7 shows the distribution of category production on a vertical scale of depth through the taxonomy. You are looking for a nice bell curve on both sides.
[FIGURE 7 OMITTED]
Using this and other latching metrics, you can go back, adjust the ontology as appropriate and re-run the documents, testing the results until you see a proper, balanced structure. Once achieved, the data is ready for import to your content management system, portal, or whatever information management or viewing mechanism you choose.
This description is intended to provide a conceptual view of the proper design and components of taxonomy design and its function within the initial development phase of a taxonomical knowledge management structure. Like most design projects, the discipline is in the details and good taxonomy design generally requires the specialized expertise of a librarian or taxonomy expert.
To summarize, companies should begin by identifying the groupings of data their employees will need to access. From this, an ontology reflective of the company's unique view of its world can be created. Next, companies should investigate and incorporate thesauri that best describe their ontology. Companies can then begin taxonomy design, making sure each taxonomy used has the right number of levels and the proper depth, width and structural balance. Using the results generated when documents are initially latched to these taxonomies, companies should then re-review all categories, making sure that the document levels are balanced and reasonable. If not, the company can go back and clean up its ontology, re-running the documents until the structure is correct.
And yet, as important a foundation as a taxonomical structure is to knowledge management systems, ideally it should be invisible to the end user. Users should take for granted that they have reliable, comprehensive access to all the data they need. Instead, users should be empowered to connect data--to identify previously un-recognized relationships and build their own unique webs of related data that may be disconnected in time, geography and even domain. But all of this begins with the sound underlying structure of a taxonomically indexed corpus of data.
"A Roadmap for Proper Taxonomy Design: Part 2" will appear in an upcoming edition of Computer Technology Review.
Figure 3
Acceptance
    Product Acceptance
Accountability
    Social Responsibility
    Social Investing
Accountants
    Public Accountants
    CPAs
    Attorney CPAs
Accounting Firms
    Big Five Accounting Firms
    Big Six Accounting Firms
Figure 4
Acceptance
    Product Acceptance
Accidents
    Accident Prevention
    Aircraft Accidents and Safety
        Air Traffic Control
        Hijacking
    Boating Accidents and Safety
    Construction Accidents and Safety
    Electrocutions
    Falls
    Firearm Accidents and Safety
    Household Accidents and Safety
    Nuclear Accidents and Safety
    Occupational Accidents
        Industrial Accidents
    Occupational Safety
        Indoor Air Quality
    Railroad Accidents and Safety
    Ship Accidents and Safety
        Lighthouses
    Swimming Accidents and Safety
        Drownings
    Traffic Accidents and Safety
        Hit and Run Accidents
www.convera.com
Dr. Claude Vogel is CTO of Convera (Vienna, Virginia)
Sunday, May 23, 2004
Business 2.0 - Web Article - Management by Blog?
Sometimes the next big thing on the Net reshapes the online world (universal e-mail, a graphical browser for the Web); sometimes it evaporates upon contact with business reality (PointCast, anyone?). Wise companies explore new trends cautiously, and that seems to be what's happening with weblogs.
Most of the companies I've observed using blogs are trying it on their customers before unleashing it internally on their staffs. The external need, apparently, is more pressing. Many businesses already have other systems in place for managing internal information, ranging from simple brown-bag lunches to overkill knowledge-management regimens. But companies are always looking for better ways to touch base with existing and potential customers, and there's no hotter way to communicate on the Net than via a weblog.
Jason Butler, senior product development manager for regional job-search site BostonWorks.com, supervises two blogs, one for job seekers and one for human-resources professionals. "We opened up the HR blog because we want to be able to help people using our products," Butler says. "It's a tool to help them be better HR people, better managers. We're on the Web all the time, learning about our industry. Using the blog, we can get that information out so the community can benefit from our work." Similarly, the mission of the job-seekers blog is to keep the attention of people looking for new gigs by sharing interesting nuggets Butler and his colleagues have found on the Web.
There's no crying need for a staff blog, Butler says. "We have an internal system for project updates, a page on the intranet. There's no reason that couldn't be a blog. Right now, we see blogs more to look out, to communicate with our customers, and to solicit suggestions from them."
Currently the theoreticians are more excited about internal blogging systems than are the people who actually have to implement them. Earlier this month, on his widely read weblog, Biz Stone predicted that "blogging in the business community is about to be a big deal. When Google bought Blogger, a record skipped, the music stopped, and business folks turned their heads toward the blogging phenomenon." Stone says he thinks the most immediate uses of blogging in corporations will be in the area of knowledge management: "Companies are going to want to capture people's experiences so when they leave the company they don't take everything with them."
Stone acknowledges that these systems are not in place, but he maintains that they're inevitable. "It's only a matter of time before we have a blogging system that's able to measure the intellectual climate of employees, that can get at the sorts of questions that managers need to know the answers to. What do people think of the new parking garage? What are smart people talking about? What's on their minds? It's a great, nonintrusive way of seeing what is happening in your organization."
Many employees might feel that such a system is akin to management eavesdropping on water-cooler discussions. The internal weblogs I've seen work are those that track an idea's progress from offhand notion to fully matured proposal. I have seen three such blogs, always-on virtual whiteboards that have sped development and kept the status of projects clearer than they'd been before. They don't attempt to capture an organization's mood.
Such systems are not for every company, and they're far from widespread. And such success depends entirely on an individual firm's culture. If the company personality is too buttoned-up or secretive, a blog initiative will either fail to take off (there's nothing lonelier than a blog that doesn't get updated) or deteriorate into something unhealthy. The internal blogs that succeed will be safe, clean, well-lit virtual places in which diverse opinions are welcome and ideas -- not people -- are judged. Companies should always explore new ways of getting messages out and new tactics for fostering idea-exchange among the staff, but right now the blogging action is almost exclusively for external readers.
Chapter 8
Using Blogs in Business
Business 2.0 - Magazine Article - Acing the Exit Interview: "Acing the Exit Interview
How to mine the data in your workers' heads before the best ideas walk out the door.
By Paul Kaihla, May 2004 Issue
How's this for an eye-popping stat? About two-thirds of Lockheed Martin's (LMT) 130,000 employees are expected to quit within this decade. It's one more effect of the baby boom: 30 million of America's most experienced workers will soon be leaving the workforce -- and taking their institutional knowledge with them.
How to plug the brain drain? Knowledge-management consultants suggest a kind of Vulcan mind meld with anyone eyeing the door. 'We want to know how someone came up with that multimillion-dollar product,' says Larry Todd Wilson, who trains 'knowledge harvesters' for Halliburton, a military and oil-services contractor."
KM Jobs
Job Information
Job Posting Reference : S-2004-05-00022
Agency Name : STB
Job Title : Manager, Information & Knowledge Management
Job Brief : Development and implementation of STB's KM strategy
Job Description : You will be responsible for the development and
implementation of STB's KM strategy. You will implement knowledge management
initiatives, design KM processes and guide their implementation in STB,
identify areas where information management technologies can be leveraged,
drive change management activities, establish KM measurements, and manage
specific KM elements such as the corporate taxonomy.
Job Requirement : As a degree holder, you should have at least 5 years of
relevant working experience, including a good understanding of KM principles
and processes, and experience in implementing these within an organisation.
Good knowledge of Information and Knowledge Management processes is
essential, along with excellent change management and communication skills.
Experience in developing taxonomies would be desirable. A background in
Information Technology, while useful, is not a requirement.
Posting Date : 15/05/2004
Closing Date : 24/05/2004

Map of China

Sunday, May 09, 2004
WIKI
Wiki is in Ward's original description:
The simplest online database that could possibly work.
Wiki is a piece of server software that allows users to freely create and edit Web page content using any Web browser. Wiki supports hyperlinks and has a simple text syntax for creating new pages and crosslinks between internal pages on the fly.
Wiki is unusual among group communication mechanisms in that it allows the organization of contributions to be edited in addition to the content itself.
Like many simple concepts, "open editing" has some profound and subtle effects on Wiki usage. Allowing everyday users to create and edit any page in a Web site is exciting in that it encourages democratic use of the Web and promotes content composition by nontechnical users.
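The "simple text syntax" at the heart of a wiki can be sketched in a few lines. The following is an illustrative toy, not Ward's actual implementation: it assumes the classic CamelCase convention, where any WikiWord in the text becomes a link to the page of that name, and a word naming a page that doesn't yet exist gets a "create it" link - which is how new pages appear on the fly.

```python
import re

# A WikiWord: two or more capitalised word parts run together, e.g. FrontPage.
WIKIWORD = re.compile(r"\b([A-Z][a-z]+(?:[A-Z][a-z]+)+)\b")

def render(text, existing_pages):
    """Turn WikiWords into links; unknown names get a 'create this page' link."""
    def link(match):
        name = match.group(1)
        if name in existing_pages:
            return f'<a href="/wiki/{name}">{name}</a>'
        # Page doesn't exist yet: append a '?' link that opens the editor.
        return f'{name}<a href="/wiki/{name}?edit=1">?</a>'
    return WIKIWORD.sub(link, text)
```

For example, `render("See FrontPage and NewIdea.", {"FrontPage"})` links FrontPage to its page and marks NewIdea with a `?` inviting the reader to create it - the "open editing" the article describes.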
How blogs and wikis can help knowledge management
Knowledge management is one of the hottest business topics around at the moment, not least because organisations increasingly realise that the store of knowledge held by their employees is one of the main ways in which they can differentiate themselves from their competition. Phrases like “our people are our greatest asset” are proof that organisations are beginning to realise that capturing knowledge, and using it to add value, is one of the most important problems that they face.
One of the well-established models of knowledge differentiates between “tacit” and “explicit” knowledge. Explicit knowledge is the formally-expressed knowledge that’s found in books, manuals, data, formulae and the like; while tacit knowledge is the highly-personal “what we know” - insights and intuitions. The problem for organisations is that it’s often tacit knowledge that’s the most vital - but at the same time, it’s the most difficult to capture and classify.
The creation of knowledge within an organisation occurs as a result of the interactions of explicit and tacit knowledge, in the process of knowledge conversion. This is where both types of knowledge increase in both quality and quantity. One useful model of this process is the SECI process - which stands for socialisation, externalisation, combination and internalisation. In this post I’ll describe the SECI process and explain where wikis and weblogs can help.
Socialisation
From my window, I can see the great cathedral of York Minster, which was built over 700 years ago by generations of skilled craftsmen. They would have learnt their trade through serving seven-year apprenticeships, where they picked up the skills of their craft by observing the more experienced craftsmen. This is a classic example of socialisation - the tacit knowledge belonging to the craft masters is passed on through shared experiences to the apprentices.
A blog can help the socialisation process by recording experiences as they happen - if you blog the progress of a project, then others can read about what you are doing and pick up on information and techniques that you have shared through the medium of the blog. Similarly, a wiki can be used to provide a forum that everyone can contribute to, building up the store of explicit knowledge within the organisation or the team.
And if socialisation is about sharing experiences between people, both can help to bring together widely-dispersed teams. Tacit knowledge is often passed on through “watercooler conversations” - so by providing a virtual watercooler, wikis and blogs can help to bridge the physical gaps that would otherwise prevent tacit knowledge being shared.
Externalisation
Much of the knowledge that we have about how the great cathedrals were built has had to be painstakingly recreated by historians and archaeologists because their builders didn’t leave any written records of what they did and why. Their knowledge was tacit, and because it was never externalised - or transformed into explicit knowledge - ultimately it died with the craftsmen that it belonged to.
If organisations need tacit knowledge to be converted into explicit knowledge in order for it to be best-exploited, then wikis and weblogs have a part to play in the externalisation process. Both aid in capturing tacit knowledge by making it easier to record - by quickly writing a blog entry, or updating a wiki page, your tacit knowledge can be recorded and made explicit. The externalisation process also benefits from feedback - because wikis can be rapidly updated, the knowledge that they contain can become much more accurate as a result of this feedback.
Combination
Archaeologists and historians have spent entire careers dedicated to learning about how and why the great medieval cathedrals were built, and what we know about them today has been built up over a period of time. Later historians have used earlier research as a starting point, and built upon it with their findings.
Combination is the process of creating more complex and systematic sets of explicit knowledge - it is combined, edited or processed to create new knowledge. Blogs can aid in this process firstly by making the explicit knowledge available in the first place, and then making it possible to add to what exists through linking, quoting or commenting. A wiki enables rapid creation of explicit knowledge, but also makes it incredibly easy to edit and combine. And both provide a readily-accessible store of the new knowledge.
Internalisation
The medieval carpenter learnt on the job - through a process of socialisation as they watched their masters at work and copied them. The modern carpenter will go through a similar process, but a great deal of their knowledge will also be picked up through formal training - as they read the textbooks on the properties of their working materials, and manuals on safe ways of working. Over time, this knowledge will become second-nature to them - it will have been internalised, or transformed from explicit to tacit knowledge.
By providing a way of creating explicit knowledge from the store of tacit knowledge around the organisation, blogs and wikis can aid the internalisation process. Reading the progress of a project through a blog archive, or following a procedure that’s been documented in a wiki enables an individual to convert this into the tacit knowledge that will allow them to be effective in their roles. But having internalised the explicit knowledge, this can then lead to a new spiral of knowledge creation - tacit knowledge accumulated by the individual can be the trigger for new knowledge creation when it is shared with others.
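The four conversion modes described above can be summarised as a small lookup table, keyed by the kind of knowledge you start with and the kind you end up with. The mode names follow Nonaka et al.; the table itself is just my own compact restatement of the model, not part of it.

```python
# SECI conversion modes: (source knowledge, resulting knowledge) -> mode.
SECI = {
    ("tacit",    "tacit"):    "socialisation",    # shared experience, apprenticeship
    ("tacit",    "explicit"): "externalisation",  # recording know-how in a blog or wiki
    ("explicit", "explicit"): "combination",      # linking, editing, merging documents
    ("explicit", "tacit"):    "internalisation",  # learning by doing from documents
}

def conversion_mode(source, result):
    """Name the SECI mode that converts one kind of knowledge into another."""
    return SECI[(source, result)]
```

Reading it as a cycle - socialisation, externalisation, combination, internalisation, then back to socialisation - gives the "spiral of knowledge creation" mentioned above.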
Summary
Organisations face tremendous challenges in handling the knowledge that enables them to create their products and deliver their services. Blogs and wikis have an important part to play here - by allowing information to be captured and shared rapidly and easily, an organisation can convert the tacit knowledge held by its people into explicit knowledge that can be shared. Building upon existing knowledge is also aided by blogging and contributing to wikis. And they can provide a powerful means of getting the information to the people that need it, enabling them to become more effective in their roles, and in turn generate more tacit knowledge that can be captured and shared with others.
Further reading
The SECI knowledge conversion process was devised by Nonaka, Toyama and Konno in their paper “SECI, Ba and Leadership: A Unified Model of Dynamic Knowledge Creation” which appeared in volume 33 of Long Range Planning. The original paper is reproduced in Managing Knowledge: An Essential Reader, published by the Open University and Sage Publications - a good introduction to the more academic end of the knowledge management debate.
Posted by Tim at May 1, 2004 07:16 PM | TrackBack | Category => Blogs, Clutter Management
How to Write a White Paper – A White Paper on White Papers
By Michael A. Stelzner
About the Author: Michael Stelzner has written more than 100 papers for high-technology corporations on topics ranging from artificial intelligence to storage to the Internet. Michael's clients include Fortune 500 companies such as HP, Motorola, Intuit, Cardinal Health, Acxiom, Quantum, Compaq and Seagate as well as small emerging startups.
So you've decided you need a white paper. What exactly should the objectives be? Will the paper be well-received? How long should it be? Who will write it? These and many other questions are common concerns that should be addressed from the start. The good news is you are not alone! Since its first edition in early 2003, more than 6,000 people have read this paper. It is my hope that it leads you in the right direction.
This paper's objective is to guide you in the process of developing effective white papers and persuasive business documents.
What is a White Paper?
The term white paper is an offshoot of the term white book, which is an official publication of a national government. A famous white paper example is the Winston Churchill White Paper of 1922, which addressed political conflict in Palestine.
A white paper typically argues a specific position or solution to a problem. Although white papers take their roots in governmental policy, they have become a common tool used to introduce technology innovations and products. A typical search engine query on "white paper" will return millions of results, with many focused on technology-related issues.
White papers are powerful marketing tools used to help key decision-makers and influencers justify implementing solutions.
Know Your Audience
Perhaps the biggest mistake white paper writers make involves not properly understanding the disposition of their readers. Instant affinity is key. A white paper must quickly identify problems or concerns faced by its readers and lead them down the path to a solution provided by your product or service. Different types of readers look at the same problems from different perspectives. For example, an engineer might care about technical nuances, whereas a CIO is more interested in business benefits. In the case of high-level executives or managers, their busy lifestyle means they have extremely short attention spans, an important consideration when writing to this type of audience. If you do not grab the reader's attention in the first paragraph, you will never achieve your objectives.
Decide on an Approach
EXAMPLE A:
Title: Groundbreaking TechWidget by XYZ Company Solves Time Management Dilemma!
Opening Sentence: XYZ Company has done it again; another great TechWidget invention can help you overcome time management challenges.
There are really only two ways to write white papers: (1) by focusing on your self-interests or (2) by concentrating on the interests of your readers. The self-interest or "chest-beating" approach focuses exclusively on a product, service or solution by expounding on its benefits, features and implications. While effective in some circumstances, this approach is best left for something other than a white paper, such as a data sheet or product brief.
The self-serving approach is often focused on the mistaken belief that people like to read boring details about why your product is the best thing since the invention of the Internet. This method is an ineffective approach to writing that turns most readers off immediately. If you want your customers to actually read the paper, you should try to gain affinity with them right away. It should be noted that it is perfectly appropriate to touch on product features and benefits if they are carefully crafted into the white paper.
EXAMPLE B:
Title: Solving the Time Management Dilemma with Technology
Opening Sentence: If you find it difficult to manage your time effectively, a new class of technology products may be the solution you are looking for.
The alternative approach, and the one I strongly recommend, is to focus on the needs of your readers. This can be effectively accomplished by leading with the problems your solution overcomes, rather than the solution itself. To many people this seems counterintuitive, but it works: by focusing on the pain points experienced by the reader and talking about the problems caused by those pains, you are establishing credibility with the reader and simultaneously filtering out unqualified customers.
Consider the two examples in the sidebars. Example A does mention the problem, but it is tainted by self-serving mentions of the company and the product. Contrast that with Example B, which focuses exclusively on the problem and hints at the solution in a broad sense. Readers will feel more inclined to read Example B because it seems more educational to them. They have the chance to learn about a new technology that could solve their problem. With Example A, they learn more about the company and the product and less about the solution. Readers of Example A may never get to the point where they understand what the solution is. By describing problems, you are really developing an important affinity with the reader.
You can take it a step further by looking at issues such as historical precedence, describing new classes of solutions that address the problems and even identifying what to look for in a solution, while never once mentioning your product name or company (at least not yet). This altruistic approach will score major points with the reader and greatly increase the likelihood he or she will actually read the entire paper.
