Scriptorium Publishing

content strategy consulting


Resources


Webcast: The many facets of content strategy

July 16, 2014 by Sarah O'Keefe

In this webcast recording, Sarah O’Keefe discusses the future of content strategy.

The purpose of content strategy is to support your organization’s business goals. Content strategists need to understand how content across the organization—marketing, technical, and more—contributes to the overall business success.

The many facets of content strategy from Scriptorium Publishing

Categories: Content strategy, applied, Content strategy, theory, Opinion, Resources, Webinars | Tags: business, content strategy, marcom, tech comm


Managing DITA projects (premium)

July 11, 2014 by Alan Pringle

A DITA implementation isn’t merely a matter of picking tools. Several factors, including wrangling the different groups affected by an implementation, are critical to successfully managing DITA projects.

Note: This post assumes you have already done a content strategy analysis, which determined the DITA standard supports your company’s business goals.

Wrangling people

Good content addresses its audience at the right level. During a DITA implementation, you must do the same when working with the different groups affected by the DITA project. Talk to them about their particular concerns and requirements in language they understand.


Managing DITA projects is a lot like running a three-ring circus (flickr: Nathan King)

For example, engineers who occasionally provide bits of content don’t care about all the fancy features of the XML authoring tool used by the tech comm group. The engineers are far more interested in contributing content as quickly and easily as possible (through a simple web interface, perhaps). Also, engineers will understand the value of DITA’s topic-based, modular approach when you compare DITA to object-oriented programming.

Content modeling

Even though you’ve chosen the DITA standard as your XML model, you still have to do content modeling. You can start by creating a spreadsheet cataloging the tags in your current template and then list out the DITA equivalents. Seeing how existing information works in DITA is a good way to learn the DITA structures—but be careful not to focus exclusively on how you created content in the past. You probably didn’t follow DITA best practices in your old content, so be aware of what you may need to change—or throw out altogether—to develop quality DITA content.
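To make that mapping concrete, here is a minimal sketch of how a legacy template style (say, a procedure heading followed by numbered steps) might land in DITA task markup. The element names are standard DITA; the legacy style names and the sample content are hypothetical.

<!-- Legacy: "ProcHeading" + "NumberedStep" paragraphs (hypothetical style names) -->
<!-- DITA: a task topic with steps and commands -->
<task id="replace-filter">
  <title>Replacing the filter</title>
  <taskbody>
    <steps>
      <step><cmd>Power off the unit.</cmd></step>
      <step><cmd>Remove the cover and insert the new filter.</cmd></step>
    </steps>
  </taskbody>
</task>

A spreadsheet row for this mapping might simply read: ProcHeading maps to task/title, NumberedStep maps to steps/step/cmd.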

You also need to figure out how to track metadata—data about your data. In DITA, much of your metadata goes into element attributes. You can use those attribute values in many ways, including filtering what content does and does not go into output (conditional text), narrowing your searches of DITA source files, and including processing instructions for particular forms of output (for example, the IDs for context-sensitive help).
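As a rough illustration of attribute-based filtering (the attribute value and the administrator/end-user split are hypothetical), a step marked with an audience attribute can be excluded from one output by a DITAVAL filter file:

<!-- In the topic: mark administrator-only content -->
<step audience="administrator">
  <cmd>Restart the database service.</cmd>
</step>

<!-- In a DITAVAL file passed to the build: drop that content from the end-user output -->
<val>
  <prop att="audience" val="administrator" action="exclude"/>
</val>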

Metadata isn’t just at the topic or paragraph level, either; you need to think about its application at the map file level. (The map file is what collects topics together for an information product such as a PDF book, a help set, an ebook, and so on.) In map files, there are elements expressly for tracking publication-level information: publication date, version, release, and so on. Don’t fall into the trap of thinking metadata = attribute.
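For example, a map can carry publication dates and version information in its topicmeta, along the lines of this sketch (the product name, dates, and version numbers are made up):

<map>
  <topicmeta>
    <critdates>
      <created date="2014-07-11"/>
    </critdates>
    <metadata>
      <prodinfo>
        <prodname>ExampleWriter</prodname>
        <vrmlist>
          <vrm version="2" release="1" modification="0"/>
        </vrmlist>
      </prodinfo>
    </metadata>
  </topicmeta>
  <topicref href="installing.dita"/>
</map>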

There are other content modeling considerations. Do you need to create a new element or attribute type through the process of specialization (creating a new structure based on an existing one)? Also, how are you going to use special DITA constructs, such as conrefs (reusing a chunk of text by referencing it) and keywords (variables for product names and other often-used words and phrases)?
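Here is a quick sketch of both constructs; the file names, IDs, and key names are hypothetical:

<!-- conref: pull in a shared warning by reference -->
<note conref="shared/warnings.dita#warnings/hot-surface"/>

<!-- key definition in the map... -->
<keydef keys="product-name">
  <topicmeta>
    <keywords>
      <keyword>ExampleWriter Pro</keyword>
    </keywords>
  </topicmeta>
</keydef>

<!-- ...and the key reference in a topic -->
<p>Install <keyword keyref="product-name"/> before you begin.</p>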

Evaluating tools

Choosing a tool is not a content strategy. Choosing tools that support DITA does not mean you have a DITA strategy.

Ensure that the people evaluating tools aren’t suffering from “tool myopia.” They should not use their current tool’s capabilities as benchmarks for a new tool, particularly when a group is moving from a desktop publishing tool to an XML editor.

Also, tool requirements go beyond primary content creation. Think about the entire workflow; get input from all the groups affected by your DITA implementation, and remember that a tool for one group may not be a good fit for another.

Create a weighted spreadsheet so that the really important requirements get precedence during ranking. Also, you don’t have to limit your requirements to “yes or no” questions. If necessary, create short narratives that explain specific use cases and ask vendors to demonstrate how their tool supports those use cases.

Parse vendor claims carefully. Ask specific questions about a tool’s support of DITA constructs: How does your tool support the use of conrefs? If you’re not comfortable with the answers you get from a vendor during the evaluation process, you’ll continue to be uncomfortable (and unhappy) after you purchase a tool from them.

Strongly consider involving a third-party consultant (yes, Scriptorium!) in developing requirements and vetting vendors. The consultant can help you cut through the bunk and also act as the “bad cop.”

Understanding outputs

Default outputs (PDF, HTML, and help) generated from the DITA Open Toolkit are hideous—but fixable. Don’t be scared off by the ugliness of default output, and don’t let project naysayers use the default formatting as a cudgel: See, we can’t use DITA because the Open Toolkit doesn’t create decent output!

Take a software development approach to your outputs. For PDF content, you can create requirements that specify the formatting for different heading levels, admonishments, body paragraphs, steps in procedures, tables (both for table formatting itself and the text in the table), and so on. If you want to re-create what you have in a template file for your current text processing tool, you can take all the formatting aspects (font, line spacing, indent, color, and so on) from it. Detailed specs are going to make it easier to modify the stylesheets that transform your DITA XML into PDF, HTML, and other outputs.

Plan for a significant budget to modify the transformation stylesheets. PDF transforms are the most complex and expensive; transforms for web pages, ebooks, and so on may run less, but that depends on their complexity. Even if you have transform experience in house—and most companies don’t—you still need to block off that person’s time, and that is an expense as well.

Considering conversion

Before you even think about conversion, research DITA best practices. Reading the DITA spec won’t help you with that. Instead, get a book such as The DITA Style Guide (download the free EPUB edition) that offers advice on the best ways to implement the many elements and features in the DITA model.

If your legacy content is not tagged consistently and has lots of formatting overrides, you cannot convert that content cleanly through scripting. Also, if your content is not easily broken into chunks that are the equivalent of DITA topics, you may be better off just starting over.

Even if you’re not going to convert legacy information, you still need content to test your DITA model and the transforms for output. Create what I call a “greatest hits collection” of DITA files that represents real-world content you create and distribute. I’d recommend 50 to 100 pages of content to be sure you’re thoroughly testing your processes and outputs.

I have yet to see a conversion project completed without some sort of complication: there is going to be bad tagging, layout quirks in your source, or some other lurking horror that will pop up. Resign yourself to dealing with these surprises, and give yourself lots of lead time so you can handle those issues.

Read more tips on converting legacy content to DITA.

Have questions about managing your DITA project? Contact us. Also, watch this recording of my webcast on DITA implementations:

Managing DITA implementation from Scriptorium Publishing

Categories: DITA, theory, Opinion, Resources, XML and DITA | Tags: content strategy, dita, premium


Webcast: You want it when?!? Content strategy for an impatient world

May 21, 2014 by Sarah O'Keefe

In this webcast recording, Sarah O’Keefe discusses how content initiatives are putting new demands on technical communication: improving customer experience, building interactive documents, adding advanced visualizations, integrating translations, and more.

To meet these requirements, we must increase the velocity of technical communication. That means stripping out inefficiency and creating content development workflows that eliminate wasted time. Most publishing systems are ill-equipped for flexible, fast, and changeable production. Instead, they are intended to support a manufacturing process, in which the result is static (like print or PDF).

For today’s workflows, this approach is not good enough. We must increase our velocity so that we can support the requirements that are coming.

You want it when?! Content strategy for an impatient world from Scriptorium Publishing

Categories: Content strategy, applied, Content strategy, theory, Resources, Webinars


Webcast: The Bottom Line: Globalization and the Dependence on Intelligent Content

March 26, 2014 by Bill Swallow

In this webcast recording, Bill Swallow takes a look at intelligent content’s role in global markets, and how the entire content cycle directly affects a business’s bottom line (revenue).

Though we are often concerned with the cost of translation when developing content for global markets, traditional cost reduction practices (translation memory, reduced rates) simply aren’t enough. The number one means of cost control when engaging global markets is establishing a profitable revenue stream by delivering a quality product in those markets in a manner that is meaningful to them. By employing intelligent content with attention to globalization, we can ensure that the information we produce meets market and delivery demands in a timely manner.

The Bottom Line: Globalization and the Dependence on Intelligent Content from Scriptorium Publishing

Categories: Content strategy, applied, Content strategy, theory, Opinion, Resources, Webinars | Tags: globalization, intelligent content, localization, techcomm


Content management and localization – finding the right fit

February 3, 2014 by Bill Swallow

A CMS can be a powerful addition to your content authoring and delivery workflow or your worst enemy in translation. Or both.

Although most content management systems do support localization, they do so in many different ways. None of these approaches is inherently better than the others, but each can have serious consequences when the scope of your localization needs differs from what the technology offers.

Some systems have built-in translation UIs, where you can log in and translate the content right there in the CMS. This works very well in cases where you crowdsource your translations for internal use. Your translators would merely log in, type in the translations as needed, and log out.

However, if you’re using your CMS to manage customer- or public-facing content, or if you want to leverage translation memory for future translation work, you may be in for a rude awakening. To leverage your translation memory, you would need to export or otherwise copy/paste your content into an external file for translation. This adds significant overhead (people needed to copy/paste the source out and translations back in), introduces substantial risk of human error, and ultimately will delay the overall effort.

Some systems provide a raw export of content in XML, which theoretically can be used anywhere. It’s XML, therefore universal, right? Well, not exactly. Chances are that your CMS uses a “special blend” of XML to manage your content that it alone understands. A raw export will indeed give you everything, and it will be up to your translators to figure out exactly what requires translation and what should not be touched.
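A hypothetical raw export makes the problem obvious. Only the title and body below need translation, but nothing in the markup tells the translator that; the element names here are invented for illustration:

<item id="4711" type="article" workflow-state="published">
  <slug>pump-maintenance</slug>
  <template>two-column</template>
  <title>Pump maintenance</title>
  <body>Inspect the seals every six months.</body>
</item>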

Computer-assisted translation (CAT) tools that translators use can be configured to properly handle the markup (and not all CAT tools are created equal in this regard, either). This takes time to configure and requires a dedicated technician on the translation side to maintain these filters for you over time.

Many CMSes allow for what we’ll call “creative formatting”, storing hand-formatted content in CDATA sections in the XML. CDATA sections basically instruct XML parsers to ignore that section and allow the local formatting in those sections (usually in HTML) to prevail. The tagging within those sections will be visible to the translators, requiring them to either hand-code the markup into the translations or spend considerable time and effort filtering the files to handle the local formatting. Either way, this adds a considerable amount of time to the translation effort. Your best bet is to consult with your translation vendor ahead of time to determine the best course of action.
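A small (hypothetical) example of what the translator sees when formatting lives in a CDATA section; the embedded HTML tags arrive as literal text:

<body><![CDATA[Press <b>Start</b> to begin.<br/>Wait for the green light.]]></body>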

Some content management systems do try to fully support a translation workflow by offering an export in XLIFF format, because that’s the format many translation tools work with. The problem with this solution is that XLIFF is not a hand-off format for translation, but an internal string management format for CAT tools. It stores language pairs (source and target) for every string needing translation, and is usually heavily extended and customized by the tools that employ it.
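For illustration, a stripped-down XLIFF 1.2 fragment looks something like this (the file name and strings are made up); note that every unit carries both a source and a target slot:

<xliff version="1.2" xmlns="urn:oasis:names:tc:xliff:document:1.2">
  <file original="pump-maintenance.xml" source-language="en" target-language="de" datatype="xml">
    <body>
      <trans-unit id="title-1">
        <source>Pump maintenance</source>
        <target/>
      </trans-unit>
    </body>
  </file>
</xliff>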

What this means is that the translators will need to spend time hacking the XLIFF to remove the target language portions every time they receive an XLIFF file in order to get at just the source content. They then need to marry up their final translations to the source in the original XLIFF file so your CMS can import it.

Now, this post is not all doom and gloom. In each of the scenarios mentioned, there are ways to best handle the translation process. Just as not all CMSes are created equal, the same holds true for localization tools and workflows. In order to circumvent many of these pitfalls, involve your CMS vendor and your localization vendor at the beginning or as soon as possible. You should iron out your localization workflow with both parties, ensuring that the CMS meets your localization needs and that your localization vendor can efficiently handle what the CMS requires.

Choose wisely, and consider your localization workflow up-front. Your choices will affect costs, quality, and time to deliver your localized content. We are available to help you find the best fit for your content and avoid unnecessary pitfalls.


Categories: Content strategy, theory | Tags: content management, content strategy, localization


Webcast: Trends in technical communication 2014

January 17, 2014 by Sarah O'Keefe

In this webcast recording, Sarah O’Keefe and Bill Swallow of Scriptorium Publishing discuss what’s new in technical communication. Alan Pringle moderates.

Trend 1: People like their silos.
