In the past when I have written about metadata, I talked about how it can be used to help define a project or test. Today I want to briefly revisit the topic to discuss open-topic versus closed-topic metadata tags.
In cases where you have open-topic metadata, users are free to create any metadata tags. During the initial discovery phase of a project, this is a powerful ability, allowing users to define the key descriptive elements of the work in progress. On most projects, as development matures, the rate of creating new metadata tags declines. At some point, groups should consider moving to a closed-topic metadata tagging system.
Why closed? How much is too much?
As a general rule of thumb, most objects support 5 to 10 metadata tags; beyond that, the information contained in the tags becomes redundant. Creating additional tags ends up being a burden in three ways:
- The meaning of the tags becomes unclear, as the gradation between them becomes finer.
- Redundant tags creep in (truth be told, my blog posts use both the #MBD and #ModelBasedDesign metadata tags).
- People search for the wrong thing: if I look in my database for #MBD and the post is tagged #ModelBasedDesign, I will miss that post.
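The problems above can be sketched in code. Here is a minimal, hypothetical example of enforcing a closed-topic tag set with alias normalization, so that a redundant tag like #MBD always maps to one canonical tag; the tag names and alias table are illustrative, not a real schema:

```python
# Illustrative closed-topic tag set; in practice this would be the
# agreed-upon list the group adopts when closing the tag vocabulary.
CANONICAL_TAGS = {"#ModelBasedDesign", "#Testing", "#Requirements"}

# Redundant tags map to a single canonical form, so searches never miss.
ALIASES = {"#MBD": "#ModelBasedDesign"}

def normalize_tags(tags):
    """Map aliases to canonical tags; reject tags outside the closed set."""
    normalized, rejected = set(), set()
    for tag in tags:
        tag = ALIASES.get(tag, tag)
        (normalized if tag in CANONICAL_TAGS else rejected).add(tag)
    return normalized, rejected

ok, bad = normalize_tags(["#MBD", "#Testing", "#MyNewTag"])
# ok  -> {"#ModelBasedDesign", "#Testing"}
# bad -> {"#MyNewTag"}
```

With open-topic tagging, `#MyNewTag` would simply be created; with a closed set, it is flagged for review, which is exactly the control a maturing project wants.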
These comments on metadata, of course, apply across different applications. The use of categories quickly becomes useless when the categories are so narrow that you can neither find them nor fit anything into them. #HopeYouLikedIt, #PleaseComment, #NoNeedForHashTags
Please forgive the early post…
When developing a control system, feedback is critical; in creating a company-wide software process, feedback (from your employees) is even more important. What is the best way to gather that information, and what information should you be collecting?
What did your bug reports tell you?
Bug tracking systems serve as the “first pass” for gathering this information. When developing the software process, a category of “workflow issues” should be included in the tracking software. These workflow bugs will show problems related to:
- Poor documentation: The primary way users learn about the Model-Based Design workflow is through the documentation.
- Architecture interfaces: Poor interfaces, for either model or data integration, will emerge as new design patterns are explored by new groups. The process adoption team must determine whether the interface should be extended or a new interface defined for the group-specific requirements.
- Test failures:
- Modeling guidelines: Failures in modeling guidelines will show where users have difficulty in conforming to modeling standards.
- Regression test failures: These can indicate an improperly defined regression test system. During the initial development of the test environment, it is common for there to be errors in the system.
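Once a “workflow issues” category exists in the tracker, the reports can be tallied to show where the process needs the most attention. A minimal sketch, assuming a hypothetical tracker export where each bug carries a category field (the field and category names below are illustrative, not from any real bug tracking tool):

```python
from collections import Counter

# Illustrative workflow-issue categories, mirroring the list above.
WORKFLOW_CATEGORIES = {
    "documentation", "architecture-interface",
    "test-failure", "modeling-guideline", "regression-test",
}

def tally_workflow_bugs(bug_reports):
    """Count workflow bugs per category; ignore non-workflow reports."""
    return Counter(
        bug["category"] for bug in bug_reports
        if bug.get("category") in WORKFLOW_CATEGORIES
    )

reports = [
    {"id": 1, "category": "documentation"},
    {"id": 2, "category": "modeling-guideline"},
    {"id": 3, "category": "documentation"},
    {"id": 4, "category": "feature-request"},  # not a workflow issue
]
print(tally_workflow_bugs(reports).most_common(1))  # -> [('documentation', 2)]
```

A simple tally like this, run periodically, turns the bug database into the feedback signal the process adoption team acts on.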
Direct feedback / viewing
At the one-, two-, and six-month marks, groups new to the process should be brought in for a formal process review meeting. During the meeting, the following activities should take place:
- Design reviews: The models, tests, and data management files should be reviewed to ensure that best practices are followed.
- Pain points: Request feedback from the teams to capture existing pain points.
Collecting feedback from new teams is critical to understanding where processes can be improved. Development is, as always, an iterative process requiring input from teams outside the initial “core” team.
For this video blog, I cover how to speak about ROI for Model-Based Design projects.