The one common denominator in all the apparel PLM implementations I have done is that clients not only underestimate the importance of data libraries, they also drastically underestimate the effort required to produce them. As the old saying goes, "you only get out what you put in", and PLM is no different. People too often get consumed in meticulously revamping processes, lost in the shiny new functionality now at their disposal and downright giddy at the prospect of all the wonderful reports they can churn out. Building libraries of clean, rich and meaningful data is treated as a second priority, only looked at once the new processes have been mapped out and configuration is underway. If we look at this in a purely linear fashion then of course you don't need library data until the configuration is complete, but building it is a sizeable task that takes time to do properly. I've lost count of the number of times a go-live date has passed and the libraries are still sparse. What many people do not realise is that library data is the foundation any successful PLM system is built upon; without it, PLM is just a fancy structure waiting for the roof to come crashing down.
So first off, let’s examine what library data consists of and why it is so important for PLM to be a success.
Library data in apparel PLM consists of any piece of data that needs to be associated to products, or to other records such as fabrics, on a repeatable basis. These can be categorised into simple modules, which are singular sets of data, and complex modules, which are data sets comprising a number of other library data modules. Whilst there are many examples, common simple modules include:
Colour Libraries
Used to store both colour standards (e.g. Pantone) and company colours. Colour libraries consist of thousands of colours that can be reused across multiple seasons to build seasonal colour palettes, style and material colourways and BOM (bill of materials) colour options.
Material Libraries
Used to store all fabrics, trims and any other BOM components that need to be associated to styles. These are often categorised into house/core materials and seasonal materials, and organised by department, by brand or globally shared.
Supplier Libraries
Used to store all types of suppliers: agents, vendors, factories, raw material suppliers and so on. Suppliers can be assigned to both styles and materials for sampling and costing.
Examples of complex modules include:
Size Charts
Usually a combination of three different sub-libraries: size categories, POMs (points of measure) and size charts/gradings. Size categories and POMs are reused across multiple size charts/grade rules to create templates that can be added to styles.
Product Templates
A product template is used as a starting point for all new style creation except for carryovers. A product template is usually created for each brand/department/product type combination. Any data that is standard across all styles of that type should be included in the template. For example, if a branded men's shirt always uses certain trims, these should all be included on the BOM in the men's shirt template for that brand; the same goes for the size chart. A product template often consists of many other library data modules: materials, size charts, construction images, sometimes even colours and suppliers.
Whether simple or complex, library data can be used to quickly create new styles and is often used to build up other more complex sets of library data.
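The relationship between simple and complex modules can be sketched as a small data model. This is an illustrative sketch only: the class and field names below are invented for this example and do not come from any particular PLM system.

```python
from dataclasses import dataclass, field
from typing import Optional

# Simple modules: singular sets of data, each record reusable across seasons.

@dataclass
class Colour:
    code: str   # e.g. a Pantone reference or a house colour code
    name: str

@dataclass
class Material:
    code: str
    description: str
    material_type: str   # e.g. "fabric" or "trim"

@dataclass
class SizeChart:
    category: str                              # e.g. "menswear tops"
    poms: list = field(default_factory=list)   # points of measure

# A complex module composes simple ones: a product template bundles the
# standard BOM items, size chart and colours for one brand/department/
# product-type combination.
@dataclass
class ProductTemplate:
    brand: str
    product_type: str
    bom: list = field(default_factory=list)        # list of Material
    size_chart: Optional[SizeChart] = None
    colours: list = field(default_factory=list)    # list of Colour
```

The point of the composition is that a change to one Material record is seen by every template, and every style, that references it.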
So why is it important in the success of a PLM implementation?
One of the key factors is usability. Nothing signals the impending failure of a PLM project like disgruntled users, and nothing gets users more disgruntled than repetitive data entry. When implementing PLM you should always try to follow the 80/20 rule: a couple of clicks should populate the 80% of the data that is standard to the style type, leaving the 20% that is unique to this particular style code to be keyed in. Whilst data entry will always be seen as the bane of any product developer's life, the easier and more streamlined you can make it for them, the happier they will be.
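That 80/20 split can be sketched in a few lines: a new style starts life as a copy of its template (the standard 80%), and only the style-specific fields are keyed in on top. The template contents and field names below are invented for illustration.

```python
import copy

# A hypothetical men's shirt template -- all values are made up.
mens_shirt_template = {
    "size_chart": "MENS_TOPS_V2",
    "bom": ["BTN-001", "LBL-BRAND", "THREAD-STD"],
    "construction": "standard shirt construction pack",
}

def create_style(template: dict, **unique_fields) -> dict:
    """Copy the template, then layer on the ~20% unique to this style."""
    style = copy.deepcopy(template)   # deep copy so the template is untouched
    style.update(unique_fields)
    return style

style = create_style(mens_shirt_template,
                     style_code="SS26-1042",
                     description="Oxford shirt, slim fit")
```

The deep copy matters: the shared template must never be mutated by the styles created from it.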
Following this 80/20 rule also does wonders for your efficiency. Without this core data easily and readily available within PLM, users have to spend time and effort tracking down the data they need for every single specification they produce. If all your house trims still reside in that down-on-its-luck ring binder that mysteriously moves around the office by itself, then any time savings your PLM system could bring are negated by your Product Developers playing hide and seek with the contents of the filing cabinet. Even if this sort of information is held digitally in spreadsheets, changes are much harder (not to mention more repetitive) to manage, and adding the same data time and time again to different styles is tedious. Putting it in a reusable PLM library will improve not only your lead time but the mental health of your users as well.
Perhaps more financially quantifiable is the benefit that clean and extensive PLM data libraries bring in helping to eliminate costly errors. Firstly, everyone is looking at the same information and all styles reference the one source. If the price or status of a material changes then that change is propagated down to all the styles using that material automatically so Product Developers can make informed decisions about whether to continue using it. Working from a single point of reference for the grading of a product type standardises the fit across all styles in a season, reducing sampling costs and customer returns. And that 80/20 rule, well it goes a long way in helping to combat the unavoidable tendency for humans to err, again stopping those pesky mistakes in their costly tracks.
And finally, you want PLM to become a tool that users rely on. If they are faced with empty libraries and constantly have to look elsewhere for the data they need, then eventually they will just revert to the legacy systems and old ways of working. PLM will become a hindrance that gets abandoned, and that really will be a costly mistake.
So now we understand what PLM libraries are and why they are important, let’s examine how you should go about compiling them.
Library building consists of three main steps. The first is to examine what data you already have and give it a good spring clean, so you are only putting in relevant information; you also need to gather or build any information that needs to be in PLM that you do not already have. The second is to get that data into a format ready for loading. The third is to actually load it into PLM. This all sounds quite straightforward, but doing it correctly can be a mammoth undertaking, and choosing the right personnel for each stage is also very important.
In a nutshell it can be broken down into gather and clean > format > load.
Gather and Clean
This is by and large the most important and time-consuming step. The last thing you want to do is dump everything you already have into PLM. This is the perfect opportunity to clear out all the junk and make sure what you are going to put in is not only relevant but also accurate. Additionally, there will be new data to gather that you don't already have records for.
Go through all your house fabrics and branded trims. Have you used that item in the past three years? Will it be used in the future? Do you have a photo of the item? This is particularly important for trims, especially if your PLM integrates with Adobe Illustrator and Designers want to add them to their Design Cards.
Standardise those measurements. Do you currently have problems with fit or fit variance? Take time to correct and standardise the gradings. Build measurement templates for all your size category/product type combinations. Have you allowed for variations for knit and woven options? Which POMs are critical? How many optional POMs should be available for each product type? Do you have gradings for them? Do you have how-to-measure images available for each POM or size chart? Investing the time to create these will help reduce the sample iterations you go through and will ensure that fit is standardised across styles.
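The grading part of that exercise can be sketched as follows: each POM gets a base-size value and a per-size increment, and the grade rule expands them into a full size chart. The POM names, base values and increments below are purely illustrative, not real measurement specs.

```python
# Hypothetical grade rules -- POM -> (base value in cm, increment per size step)
base_size = "M"
sizes = ["S", "M", "L", "XL"]
grade_rules = {
    "chest": (104.0, 4.0),
    "body length": (74.0, 1.5),
    "sleeve length": (64.0, 1.0),
}

def build_size_chart(rules, sizes, base_size):
    """Expand base values and increments into a measurement per POM per size."""
    base_index = sizes.index(base_size)
    chart = {}
    for pom, (base_value, step) in rules.items():
        chart[pom] = {
            size: round(base_value + (i - base_index) * step, 1)
            for i, size in enumerate(sizes)
        }
    return chart

chart = build_size_chart(grade_rules, sizes, base_size)
# chart["chest"] -> {"S": 100.0, "M": 104.0, "L": 108.0, "XL": 112.0}
```

Store the rules once per size category/product type combination and every style graded from them inherits the same, standardised fit.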
Colour libraries are often available as load files from the standards provider, but do you have house colours that need to be input, or your own marketing names for Pantones documented?
What product templates will you need? What data should be part of a product template? Are all the BOM items you need already included in your materials library?
For every library module you need to apply the same level of scrutiny and input.
So, when to start and who should do it?
As soon as you know you are going to implement PLM, you should start the gathering and cleaning process. The reasons for this are twofold. Firstly, it takes a lot of time, so it is never too soon to start. Much of the gathering and cleaning is independent of the PLM configuration you decide upon, so you can start before you even begin to think about configuration; in some ways, building the data libraries will help you identify the configuration requirements anyway. Secondly, with the possible exception of taking photos of materials, this effort requires knowledgeable personnel. It is likely that the same people will also be required to contribute to the process and configuration workshops once the implementation gets underway, and they can only be taken away from their day-to-day responsibilities so much over any one period of time, so the more you can get done beforehand the better. I've seen companies try to employ extra resource to help with this part of the library building, but it just doesn't work. You need the people who know the business inside out to answer the questions that need asking and to build the best, most accurate and efficient templates.
Format
This is the element of library building that does require the configuration to be documented, as it is a case of making sure all your gathered data matches up to the structure, hierarchy and fields you have defined as part of your configuration. It's making sure that all the column headers in load files match the database keys you have for field labels. It's ensuring that field values match the options you have configured in drop-downs. It's a tedious job, but someone's got to do it; that someone doesn't have to be key personnel, and this is where additional, perhaps temporary, personnel do make sense to utilise.
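Those two checks, headers against configured field keys and values against configured drop-down options, are mechanical enough to script. This is a minimal sketch; the field keys and drop-down options below are invented, and a real check would come from your own configuration documentation.

```python
import csv
import io

# Hypothetical configuration -- field keys and drop-down options are made up.
CONFIGURED_FIELDS = {"material_code", "description", "material_type", "supplier"}
DROPDOWN_OPTIONS = {"material_type": {"Fabric", "Trim", "Packaging"}}

def validate_load_file(csv_text: str) -> list:
    """Return a list of formatting errors found in a CSV load file."""
    errors = []
    reader = csv.DictReader(io.StringIO(csv_text))
    # Check 1: every column header must match a configured field key.
    unknown = set(reader.fieldnames or []) - CONFIGURED_FIELDS
    if unknown:
        errors.append(f"Unknown columns: {sorted(unknown)}")
    # Check 2: drop-down fields may only contain configured option values.
    for row_num, row in enumerate(reader, start=2):  # row 1 is the header
        for field_name, options in DROPDOWN_OPTIONS.items():
            value = row.get(field_name)
            if value is not None and value not in options:
                errors.append(f"Row {row_num}: '{value}' is not a valid {field_name}")
    return errors
```

Running every load file through a check like this before handing it over is far cheaper than discovering mismatches during the upload itself.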
Load
One of the common mistakes people make is assuming that library data can simply be uploaded into PLM just before go-live. Simple modules can often be uploaded, and as these are what people often assume a library consists of, that perhaps explains why library building is commonly underestimated. Complex sets, however, often have to be entered manually, as they are too interconnected with other data sets to load without issue. Manual data entry takes time, so allow for it in your planning.
The configuration needs to be locked down and in place before the loading can start. Depending on your PLM provider and/or implementer, they should be able to do all the automatic uploads for you, for example materials, colours and suppliers, especially if you have invested time in getting the formatting correct. For the more complex sets such as product templates, if these have to be loaded manually then it can be beneficial to get your 'super users' to do so as part of their practice after training. That way they have real-life data to use as part of their experience-building before they are required to support the seasonal go-live.
Ongoing Maintenance
Unfortunately, as with most things of value, PLM libraries need looking after. Granted, the majority of the work resides in the upfront effort, but there will be ongoing maintenance required after go-live and every season to keep your library in tip-top condition. Probably the most significant part of this upkeep will be the entry of seasonal materials. On a more ad hoc basis there will also be data to add when, for example, new product types are introduced.
Who is responsible for this depends on the amount of data that needs to be maintained on a regular basis. Although it is more than likely not a full-time role, if given as an additional responsibility it will still be a significant amount of extra work to take on. Seasonal materials, for instance, need to be input as soon as they are available, so this cannot take second priority in someone's already burgeoning workload. Resource planning the library maintenance upfront will save a lot of dispute and stress further down the road.
Building the extensive libraries required to really get the most out of PLM, and to help ensure a successful implementation and roll-out, is a time-consuming and often tedious process that requires regular maintenance even after go-live. However, if you do it correctly it will pay dividends in terms of return on investment and user adoption of your new PLM system.
If you would like to find out more about how Apparel Thing can help you build your PLM data libraries, then click here.