The Calcott Consulting Blog

Articles on Commercial Manufacturing

Operations Excellence strikes again

July 9th, 2015

You’ve heard of it. You’ve probably experienced it. You may even have survived it. Yes, I am talking about “Project Operations Excellence”. Don’t get me wrong: the principles behind it are very good, even admirable. The premise is that we operate today with processes that have evolved over time. A Band-Aid here, a Band-Aid there. The result is a set of processes that are inefficient, not targeted to the customer and, to be honest, often counterproductive, sometimes counterintuitive. It does not sound good, does it?

Wouldn’t it be wonderful if we could deconstruct our processes and start again? That is what these companies sell you: the promised land of operations that are logical, lean (which brings me to another element of this concept, but I’m not getting drawn into that now – hint: it suffers from some of the same issues), lacking in superfluity, less is more, you get my drift.

So you sign on the dotted line and in come the saviors. Most often it is not the A team that did the sales pitch, the dog and pony show; if you are lucky, it is the C or D team: a couple of people who have seen the inside of an operation for a few hours, together with a band of neophytes fresh out of MBA school. They know the jargon, they are excellent at PowerPoint (I think that is one of the majors) and the theory is at their command. Their practical experience comes from case studies, which never go wrong.

Don’t I sound cynical and negative? Actually, no, I am neither; I did not expect anything else. I have been through this exercise five or six times at different companies, using some of the industry’s best firms, and I have seen it at companies I consult with. The problem is that we are our own worst enemies. We do not challenge these saviors with data, facts or our experience. Just because they conclude something does not mean it is the best idea; it is an idea that should be considered. But in the short 30-minute interviews they use to gather their data, they often do not collect all the right information or ask the right questions. Often it seems they have gone into these companies with the answer already in hand and are gathering data to support it.

An example came up recently when talking with a client, related to the operational set-up of a piece of equipment. At first glance, it appeared that after the equipment was set up, it was checked twice before use. The saviors immediately pointed out the duplication and eliminated the second assessment. Nobody asked why there were two checks; the junior person being interrogated did not have the answer. Voila, the step was removed. It was only afterwards, from a more senior person who had been there when the second check was put in place, that the explanation became clear: this piece of equipment had a tendency to drift in its settings, and the second check just before use was there to detect whether it had.

Perhaps the first check could have been removed? But that was never asked. An A team member with good process experience might have caught that one.

Modern root cause analysis contends that silver bullets do not exist. Rather, a series of events (each incapable of causing the problem by itself) align, and it is this alignment that causes the failure. So we often end up with several corrective actions, each improving one of the contributing elements.

Is this example an isolated incident? I contend not, because I have been into many companies reeling from these assessments. In fact, one company now in regulatory trouble can trace its decline in operational performance to an Operations Excellence episode. The results of the study were taken as gospel and implemented blindly. Clearly, all were at fault: an incomplete analysis on one side, and uncritical acceptance of everything on the other.

So I caution everybody to think through how you manage an Operations Excellence project:

  1. Don’t assume that their answers are right – they are suggestions.
  2. They only get a snapshot of what is going on.
  3. If you don’t ask all the right questions, the answers may not be right.
  4. You are the ones who know what you do and why.

Remember that these people coming in to assess do their work and leave. They do not have to operate with what they leave behind. They are like a flock of seagulls: they fly in, squawk, leave you a present (hint: it is not paper) and fly off. What you should be looking for in an Operations Excellence project is a team that will evaluate, propose solutions and help you implement them. And make sure you get the A team.

Managing a CMO can be tricky

June 15th, 2015

Much has been said about choosing a CMO, developing contracts and establishing a quality agreement. But that is not all: it also involves establishing operations, developing a relationship and building trust between the two parties. I have just written and had published an article on this topic, in the May issue of Pharmaceutical Outsourcing. The article is entitled “Maintaining the relationship with a CMO”. The link to the article is here.

QA and Manufacturing should be working on the same team

May 15th, 2015

During the last couple of months I have been working with a company that is undergoing a transition as products move through their pipeline – quite a radical shift in strategy.  Over the years, their Quality Management System (QMS) has evolved.  But as with all evolution, the small incremental changes that occur might lead along a path to an improved system or an evolutionary dead end.  With this radical shift, they invited me to come in and assess what they are doing right, what they are doing badly and what, perhaps, they are not doing.

So the first step was a gap analysis, which entails reading documents and records and interviewing people. Fortunately, management was highly supportive of this exercise, so I got extreme cooperation from all parties. Documents (SOPs, policies, quality manuals and quality plans) were supplied to illustrate the systems. In addition, I requested records, which were supplied readily. These included deviation investigations, CAPAs and change controls, to name just three types. While I could not look at everything, I asked that typical examples be shown; I wanted to see the average record, neither the best nor the worst. During discussions, which ranged from one-on-ones to small groups, people felt comfortable describing what they liked and what they found frustrating. The interesting feature I noted was that the same frustrations were felt universally across the organization. It did not matter whether it was QA, QC, engineering or production: all felt the same.

The analysis revealed that the three systems described were all weak. While the company used an electronic system to track and manage them, the end result, the reports, did not meet the requirements of either the practitioners or me. That is, they were neither well written nor clear. They lacked clear logic: for deviations, an explanation of what had happened and why; for CAPAs, how they linked back to the deviations; and for change controls, the rationale for the change. And why was this? In a way, the people involved had treated their systems as hurdles to be overcome to move to the next step, rather than the tools they should be. With the e-system, it appeared the goal was to move the document from their inbox to the next person’s inbox.

They had lost sight of the fact that an investigation’s goal should be to:

  • Uncover what went wrong, identifying the factors that contributed to the issue (they got some of the factors but not all).
  • Recover the material or data (they were good at that).
  • Prevent it from happening again (not good at that).
  • Prevent similar things from happening elsewhere (almost totally absent).

In this case, the e-system had done the company a grave disservice. It had taken the thinking out of the process; it had subverted the process from serving the operations into a system that had to be served. It had also driven behavior toward short-term thinking at the expense of the long term. They had cultivated a fire-fighting mentality.

The exact same thing had happened in the change control system. There was no critical evaluation of any of the changes. The changes were made to save materials rather than to build a rugged process or system.

To combat this systemic problem (and all three areas were linked), we embarked on a training session. It included three elements:

  1. Back to basics on the three systems: explaining what the desired state was and how to get there.
  2. Team exercises with real-life examples of challenges, using some of the principles and skills learnt.
  3. Practical examples from their own company. I selected examples of deviations/investigations, CAPAs and change controls and worked through some of them with the group. After I gave my read on the situations, I let them loose to try out the new skills themselves.

During the second round, there was a breakthrough by one of the staff members. Suddenly, the light bulb went off. I have never seen such an excited person in all my years of training. I am going back in a couple of weeks to see how things are going. I am quite optimistic. I will keep you posted.

Effective auditing – one size does not fit all

April 15th, 2015

Every company I work with has a problem with their auditing program. Some believe they are overdoing it; others feel they are not doing enough. In a sense, they are both right. In actual fact, they are all overdoing it in certain areas and underdoing it in others.

Back when I started in industry, and I am afraid to tell you exactly when, auditing programs were written in stone in SOPs. The frequency, duration and number of people were defined and rigorously enforced. Audits were each conducted the same way, and lists of findings were assembled and sent to the auditee. If we were lucky, the findings were responded to and CAPAs developed. The report was closed out and filed. After the prescribed period of time, the process was repeated. Often the same findings were seen at the next audit, so either the CAPA was not done or it was ineffective. Not exactly an efficient, effective process, but it satisfied the regulators.

Today a program like that is just not acceptable.  Why?  Have the regulations changed? Have our expectations changed?  Has the world changed?  The answer to each is yes.

Over the last 20 to 30 years we have seen a lot change. We have seen drug tampering (Tylenol and cyanide), counterfeit drugs in the marketplace (you get those emails offering drugs at unbelievable prices) and incidents like the Baxter heparin problem. Both industry and regulators have taken note and reacted. In the US and EU, regulators have recognized the problems and issued new regulations and guidances. The Falsified Medicines Directive (FMD), the Food and Drug Administration Safety and Innovation Act (FDASIA) and the Drug Supply Chain Security Act (DSCSA) have been issued and are in the process of implementation. So how does this fit into the auditing program? The auditing program is a tool that enables you to meet the spirit of what these regulations are driving at.

We perform both internal audits of our own operations and audits of third parties. These third parties include our CMOs and our suppliers of raw materials, excipients and actives, as well as of services such as testing labs, engineering functions and distribution, to name a few. The functions of an audit are many, including assessing whether we want to do business with an entity in the first place (the vendor qualification program), routinely assessing whether we want to continue to use them (continuous verification) and assessing them after some element has failed (for cause).

Each of these is approached differently, depending on why we are auditing.

  1. Vendor assessment – usually you have never worked with the vendor before, or at least not recently, so this is an exploratory audit to assure they are operating to an appropriate standard, one compatible with your expectations.  Because of this, the goal is to assess all their systems.
  2. Continuous verification – you have experience with this vendor.  You know what they do well and perhaps have identified areas where improvement might be needed.  You are often following up on previous audits or on experience with their services (described in the annual product review).  So it is often more directed than the vendor qualification audit.
  3. For cause – something has definitely gone wrong.  So this is a very directed audit, focused on the areas of potential deficiency.  The outcome may be to continue to use them or to terminate the relationship.

Which brings us to how to conduct an audit that adds value.  ICH Q9 is a wonderful guidance that, used intelligently, can help you develop a truly risk-based auditing program, one that balances the “too much” against the “too little”.  I highly recommend integrating this guidance into your auditing program.  Remember: if you do not document your risk decisions, the regulators will find you lacking.

I use the old moniker of

say what you do,

                    do what you say,

                                          prove it and

                                                           improve it.

Put another way, it is really documents, execution, records and continuous improvement.

The following steps may aid you in defining the audit program:

  1. Never schedule your auditors for more than 67% of their time, and that includes prep time and report-writing time.  The other third is important for the unexpected: the for-cause audits, the new suppliers, the new emergency programs and the deep dives you provide as a service to your internal customers.
  2. Determine the risk factor for the particular vendor.  That includes not just the service provided but also their track record.  This will determine the frequency, duration and manpower needed, and it needs to be kept current because situations change.  (A minimal sketch of such risk scoring follows this list.)
  3. You have limited time at the vendor, so use it well.  Prepare an outline of what you want to accomplish (the type of audit), what you know and what questions need answering.  If possible, do work before you arrive.  That could mean sending out a focused questionnaire covering the relatively simple elements (which you can confirm when you arrive).  You can even present them with a proposed agenda, so they are prepared and have no excuse when you arrive.
  4. So what do I focus on when I arrive?  A process flow approach is my choice.  For suppliers of actives or for CMOs, I walk the process with my questions and get my answers in situ.  Armed with my preparation work, I walk through the facility, quite prepared to stop, even for a significant length of time, to explore further if I sense an issue.  For testing labs, I walk the samples.  This is the execution component.
  5. I look at paperwork later, focusing on the quality systems of interest.  I do not read SOPs or policies at this stage but rather focus on the records: deviations and investigations, CAPAs, change controls and lot dispositions.  The threads I find lead me into the various other systems.  These records are the pulse of the organization and tell you a lot about the company.
  6. If necessary, I go to the SOPs and policies.  That is the documentation part: confirming that what they say corresponds to what they do.
  7. I also look at operations and people to detect signs of continuous improvement, which is often picked up not in documents but in conversation.
  8. I look for evidence of a modern approach to quality, as shown by the active involvement of management.
  9. In the close-out, I present the observations, which I have ranked using the EU standard of critical, major and minor.  In the discussion I might even make some suggestions of how improvement might be made.  But how to address them is really the company’s decision.
  10. After you get home, make sure to follow up with requests for CAPAs after the agreed-upon time frame.
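
To make point 2 a little more concrete, here is a minimal sketch, in Python, of how a vendor risk score might translate into an audit interval.  The factor names, weights and cut-offs are my own illustrative assumptions, not values prescribed by ICH Q9; your quality unit would define, justify and document its own.

    # Illustrative sketch only: the factors, weights and cut-offs below are
    # hypothetical assumptions, not values prescribed by ICH Q9.
    FACTOR_WEIGHTS = {
        "service_criticality": 3,   # e.g. active ingredient vs. simple distribution
        "track_record": 2,          # audit history, complaints, rejected lots
        "regulatory_history": 2,    # inspection outcomes, warning letters
        "process_complexity": 1,    # e.g. aseptic filling vs. labeling
    }

    def risk_score(ratings):
        """Weighted sum of per-factor ratings, each 1 (low risk) to 3 (high risk)."""
        return sum(FACTOR_WEIGHTS[f] * ratings[f] for f in FACTOR_WEIGHTS)

    def audit_interval_months(score):
        """Map a risk score to a re-audit interval; the cut-offs are assumptions."""
        if score >= 20:
            return 12   # high risk: audit annually
        if score >= 14:
            return 24   # medium risk: every two years
        return 36       # low risk: every three years

    # Example: a CMO making an active, with a spotty track record.
    cmo = {"service_criticality": 3, "track_record": 3,
           "regulatory_history": 2, "process_complexity": 2}
    print(risk_score(cmo), audit_interval_months(risk_score(cmo)))  # 21 12

The particular numbers matter less than the fact that the decision is explicit and written down; as noted above, undocumented risk decisions will be found lacking by the regulators.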

BTW, one of my first stops is the bathroom.  Not because of a medical problem, but to see how it is kept.  Companies that have a good QMS have clean bathrooms.  For those with QMS problems, the bathroom can be a telltale.

How to handle metrics that drive the wrong behavior

March 17th, 2015

Over my career I have lived and, unfortunately, died by metrics.  What do I mean by that?  Metrics, if developed carefully and with thought, can help us attain our goals.  However, badly thought-out metrics do not just fail to help us attain our goals; they can actually prevent us from attaining them, because they are counterproductive.

Is this new?  No, I have seen it for decades, both working in the industry and as a consultant to it.  I find it amazing that these bad metrics are not confined to companies with poor compliance records; they also appear at companies that are leaders in doing things right.  Of course, bound by confidentiality, I cannot name names, but I can talk about them and give advice in the public domain.

What are these bad metrics?  They are metrics set up to measure and control certain outputs but which, as implemented, encourage the wrong behavior.  How is that possible?  Let me explain with two examples.

  1. First example

I was visiting a company recently for a training session on quality systems.  During the break, one of the attendees took me aside and described a situation at his company.  He asked that the situation be kept private, and by that he meant not only that I should not describe it publicly in a way that identified his company, but also that he did not want his company’s management to hear it ascribed to him.  Of course, I honored his request, so you will learn neither who the person is nor the company.

As with most companies, they wrestle with investigations taking a long time to close out, which leaves the company vulnerable both operationally and from a compliance perspective.  So to combat that and to drive closure, the company instituted a metric: all investigations to be closed within 30 days.  Depending on the number left open during the year, a person’s performance rating would be impacted.  Impacted performance equals a decreased pay raise, bonus etc.  You get the picture.

The result: the number of investigations lingering past 30 days goes down.  Management is content and everything is improved.  Or is it?

The answer is no!  Yes, investigations are closed out quickly, but are they really complete, and do they accurately describe the root cause or contributing factors?  In the haste to get a good grade, people close out investigations prematurely, with poor root cause analysis.  Without a good investigation, the CAPAs developed are not directed at the right things.  So the CAPAs do not solve the problem, and the result is that the problem reappears – in other words, we get repeat observations.

A better metric would be a goal of no repeat deviations or discrepancies.  That would indicate the CAPA worked because the investigation was thorough.  With fewer repeats, the workload would decrease, giving better opportunity for effort on the genuinely new observations.
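
To illustrate, here is a minimal sketch of how such a repeat metric might be computed from a deviation log.  The record fields and the matching on a root-cause code are my own assumptions; a real QMS would have to define precisely what counts as a repeat.

    # Minimal sketch: the fields and the definition of a "repeat" (the same
    # root-cause code appearing more than once in the period) are assumptions.
    from collections import Counter

    deviations = [   # hypothetical deviation log for the review period
        {"id": "DEV-001", "root_cause": "EQUIP-DRIFT"},
        {"id": "DEV-002", "root_cause": "OPERATOR-TRAINING"},
        {"id": "DEV-003", "root_cause": "EQUIP-DRIFT"},   # a repeat
    ]

    counts = Counter(d["root_cause"] for d in deviations)
    repeats = {cause: n for cause, n in counts.items() if n > 1}
    repeat_rate = sum(n - 1 for n in repeats.values()) / len(deviations)

    print(repeats)               # {'EQUIP-DRIFT': 2}
    print(f"{repeat_rate:.0%}")  # 33%

A rising repeat rate tells you that the investigations, not the closure clock, are the problem.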

  2. Second example

I was visiting a client one day to examine their quality systems, especially deviations and their handling.  I had flown several hours to visit the company and was met by the plant manager, who indicated that the issue I was there for had been solved and that they did not need my services for it.  Since I was already there, and he was paying anyway, I suggested I look at the remediation and maybe at other systems in need of help.  So off we went.

Apparently, over the last few weeks the plant manager had had a great idea.  He linked pay for performance to the number of deviations in the department.  And immediately (for the last two weeks at least), there had been a 20% reduction in deviations.  The first thing I did was go to the lot disposition department to see how things were and talk to the staff there.  They immediately reported that over the last week or so there had been an increase in the number of batch records arriving with serious errors and deviations that had not been highlighted.  Previously, the production department was encouraged to self-report deviations and highlight them to QA.  Now it was up to QA to try to find the errors.  Clearly a step backwards.  The metric had not decreased the number of deviations but rather the reporting of them.  The deviations were still there, just not reported.  We all want fewer deviations, but this is not how to get them.

In both these incidents, the metric had driven the wrong behavior.  So how do you set up metrics that work?  I recommend this simple process:

  1. First, identify the system that you want to work on. In these cases, the investigation system.
  2. Define the outcome you want.
    1. In the first case, closure of investigations.  While timeliness is important, the real goal is surely getting it right, so that we have a good chance of an effective CAPA that prevents recurrence.
    2. In the second case, of course, you want no deviations at all; but if they have occurred, you want them reported so they can be investigated properly, yielding effective CAPAs so they do not appear again.
  3. Based on the outcomes defined in point 2, set up metrics that drive the right behavior.
    1. For example one, you can have a metric of no repeat observations.  That indicates the investigation was thorough and the CAPA was directed at the right thing; hence, it solves the problem.
    2. For example two, we want batch records to arrive in QA right first time (RFT): that is, completed, checked and with all deviations highlighted and entered into the system for resolution.

Both these metrics look to the future rather than just at the immediate.  As an illustration, the RFT rate might be computed as sketched below.
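
Here is a minimal sketch of such an RFT calculation, assuming batch records carry simple status flags; the fields are illustrative assumptions, not taken from any particular e-system.

    # Minimal sketch: the batch-record status flags are illustrative assumptions.
    batch_records = [
        {"id": "LOT-101", "complete": True,  "checked": True,  "deviations_highlighted": True},
        {"id": "LOT-102", "complete": True,  "checked": False, "deviations_highlighted": True},
        {"id": "LOT-103", "complete": True,  "checked": True,  "deviations_highlighted": True},
    ]

    def right_first_time(rec):
        """RFT: arrives in QA complete, checked, with all deviations highlighted."""
        return rec["complete"] and rec["checked"] and rec["deviations_highlighted"]

    rft_rate = sum(right_first_time(r) for r in batch_records) / len(batch_records)
    print(f"RFT rate: {rft_rate:.0%}")  # RFT rate: 67%

Trend that rate over time and you reward exactly the behavior you want: complete, honest batch records.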

Are these the only examples or areas?  Of course not, but if you follow these principles, you will get improved operations.  Before any metric is established, ask the question: “Will this metric drive the behavior and result I really want?”  And be careful what you ask for; it might not be what you really want.