The Calcott Consulting Blog:

Effective auditing – one size does not fit all

April 15th, 2015

Every company I work with has a problem with its auditing program.  Some believe they are overdoing it; others feel they are not doing enough.  In a sense, both are right.  In actual fact, most are overdoing it in certain areas and underdoing it in others.

Back when I started in industry (and I am afraid to tell you exactly when), auditing programs were written in stone in SOPs.  The frequency, duration and number of people were defined and rigorously enforced.  Audits were each conducted the same way, and lists of findings were assembled and sent to the auditee.  If we were lucky, the findings were responded to and CAPAs developed.  The report was closed out and filed.  After the prescribed period of time, the process was repeated.  Often the same findings were seen at the next audit, so either the CAPA was not done or it was ineffective.  Not exactly an efficient, effective process, but it satisfied the regulators.

Today a program like that is just not acceptable.  Why?  Have the regulations changed? Have our expectations changed?  Has the world changed?  The answer to each is yes.

Over the last 20-30 years we have seen a lot change.  We have seen drug tampering (Tylenol and cyanide), counterfeit drugs in the marketplace (you get those emails offering drugs at unbelievable prices) and incidents like the Heparin / Baxter problem.  Both industry and regulators have taken note and reacted.  In the US and EU, regulators have recognized the problems and issued new regulations and guidances.  The Falsified Medicines Directive (FMD), the Food and Drug Administration Safety and Innovation Act (FDASIA) and the Drug Supply Chain Security Act (DSCSA) have been issued and are in the process of implementation.  So how does this fit into the auditing program?  The auditing program is a tool that will enable you to meet the spirit of what these regulations are driving at.

We perform internal audits of our own operations as well as audits of third parties.  These third parties include our CMOs and our suppliers of raw materials, excipients and actives, as well as services such as testing labs, engineering functions and distribution, to name a few.  The functions of an audit are manifold: assessing whether we care to do business with an entity at all (the vendor qualification program), routinely assessing whether we want to continue to use them (continuous verification), and assessing after some element has failed (for cause).

Each of these is approached differently, depending on the nature of why we are auditing.

  1. Vendor assessment – usually you have never worked with the vendor before, or at least not recently, so this is an exploratory audit to assure they are operating to an appropriate standard, one compatible with your expectations.  Because of this, the goal is to assess all their systems.
  2. Continuous verification – you have experience with this vendor.  You know what they do well and perhaps have identified areas where improvement might be needed.  You are often following up on previous audits or on experience with their services (described in the annual product review).  So it is often more directed than the qualification stage of vendor assessment.
  3. For cause – something has definitely gone wrong.  So this is a very directed audit towards the areas of potential deficiency.  The outcome may be to continue to use or to terminate the relationship.

Which brings us to how to conduct an audit that adds value.  ICH Q9 is a wonderful guidance that, used intelligently, can aid you in developing a truly risk-based auditing program: one that balances the “too much” against the “too little”.  I highly recommend integrating this guidance into your auditing program.  Remember, if you do not document your risk decisions, you will be found lacking by the regulators.

I use the old moniker of

say what you do,

                    do what you say,

                                          prove it and

                                                           improve it.

Put another way, it is really documents, execution, records and continuous improvement.

The following steps may aid you in defining the audit program.

  1. Never schedule your auditors for more than 67% of their time, and that includes prep time and report-writing time.  The extra third is important for the unexpected: the for-cause audits, the new suppliers, the new emergency programs, and the deep dives you provide as a service to your internal customers.
  2. Determine the risk factor for the particular vendor.  That includes not just the service provided but the track record of each.  This will determine the frequency, duration and manpower needed.  And this needs to be kept current, because situations change.
  3. You have limited time at the vendor, so use it well.  Prepare an outline of what you want to accomplish (the type of audit), what you know, and what questions need answering.  If possible, do work before you arrive.  That could mean sending out a focused questionnaire covering relatively simple elements (which you can confirm when you arrive).  Even present them a proposed agenda, so they are prepared and have no excuse when you arrive.
  4. So what do I focus on when I arrive?  A typical process-flow approach is my choice.  For actives suppliers or CMOs, I walk the process with my questions and get my answers in situ.  Armed with my preparation work, I walk through the facility, quite prepared to stop, even for a significant length of time, to explore more if I sense an issue.  For testing labs, I walk the samples.  This is the execution component.
  5. I look at paperwork later, and I focus on the various quality systems of interest.  I do not read SOPs or policies but rather focus on the records.  I look at deviations and investigations, CAPAs, change controls and lot dispositions.  The threads I find lead me into the various other systems.  I find these systems are the pulse of the organization and tell you a lot about the company.
  6. If necessary, I go to the SOPs and policies.  That is the documentation part: confirming that what they say corresponds to what they do.
  7. I also look at operations and people to detect signs of continuous improvement which is often picked up, not in documents, but in conversation.
  8. I usually look for evidence of a modern approach to quality, such as the active involvement of management.
  9. In the close-out I present the observations, which I have ranked using the EU standard of critical, major and minor.  In the discussion I might even make some suggestions of how improvement might be made.  But ultimately it is the company’s decision how to address them.
  10. After you get home, make sure to follow up with requests for CAPAs after the agreed upon time frame.
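The capacity rule in step 1 and the risk-based planning in step 2 can be sketched as a simple calculation.  This is only an illustration: the 67% cap comes from the text, but the risk tiers, audit frequencies, durations and 220 working days are assumptions of mine, not requirements from any regulation.

```python
# Hypothetical sketch of a risk-based audit planning calculation.
# The 67% utilization cap is from the text; the risk tiers and the
# audit frequencies/durations below are illustrative assumptions only.

RISK_TIERS = {
    # risk level: (audits per year, days per audit incl. prep & report)
    "high": (2, 5),
    "medium": (1, 4),
    "low": (0.5, 3),   # i.e. one audit every two years
}

def planned_days(vendor_risks):
    """Total auditor-days per year implied by the vendor risk profile."""
    return sum(freq * days
               for risk in vendor_risks
               for freq, days in [RISK_TIERS[risk]])

def auditors_needed(vendor_risks, working_days=220, max_utilization=0.67):
    """Headcount needed so nobody is scheduled past the 67% cap,
    leaving roughly a third of each auditor's time free for for-cause
    audits, new suppliers and other unplanned work."""
    capacity_per_auditor = working_days * max_utilization
    return planned_days(vendor_risks) / capacity_per_auditor

vendor_risks = ["high"] * 4 + ["medium"] * 10 + ["low"] * 20
print(planned_days(vendor_risks))                 # 110.0 auditor-days/year
print(round(auditors_needed(vendor_risks), 2))    # 0.75 full-time auditors
```

Keeping the risk tier of each vendor current (step 2) then feeds straight back into the headcount and schedule.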

By the way, one of my first stops is the bathroom.  Not because of a medical problem, but to see how it is kept.  Companies that have a good QMS have clean bathrooms.  For those with QMS problems, the bathroom can be a telltale.

How to handle metrics that drive the wrong behavior

March 17th, 2015

Over my career I have lived and, unfortunately, died by metrics.  What do I mean by that?  Metrics, if developed carefully and with thought, can help us attain our goals.  However, badly thought-out metrics don’t just fail to help us attain our goals; they can actually prevent us from attaining them, because they can be counterproductive.

Is this new?  No, I have seen it for decades, both working in the industry and as a consultant to it.  I find it amazing that these bad metrics are not confined to companies with poor compliance records; they appear even at the companies that are leaders in doing things right.  Of course, bound by confidentiality, I cannot name names, but I can talk about them and give advice in the public domain.

What are these bad metrics?  They are metrics that have been set up to measure and control certain outputs but that, as set up, encourage the wrong behavior.  How is that possible?  Let me explain with two examples.

  1. First example

I was visiting a company recently for a training session on quality systems.  During a break, one of the attendees took me aside and described a situation at his company.  He asked that the situation be kept private.  By that he meant not only that I should not describe it publicly in a way that identified his company, but also that he did not want management at his company to hear it ascribed to him.  Of course, I honored his request, so you will learn neither who the person is nor the company.

As with most companies, they wrestle with investigations taking a long time to close out, which leaves the company vulnerable both operationally and from a compliance perspective.  To combat that and to drive closure, the company instituted a metric of “all investigations to be closed in 30 days”.  Depending on the number open during the year, a person’s performance rating would be impacted.  Performance impacted equals decreased pay raise, bonus, etc.  You get the picture.

The result?  The number of investigations lingering past 30 days goes down.  Management is content and everything is improved.  Or is it?

The answer is no!  Yes, investigations are closed out quickly, but are they really complete, and do they accurately describe the root cause or contributing factors?  In the haste to get a good grade, people are closing out investigations prematurely, with poor root cause analysis.  Without a good investigation, the CAPAs developed are not directed at the right things.  So the CAPAs do not solve the problem, and the result is that the problem reappears.  In other words, we get repeat observations.

A better metric would be a goal of no repeat deviations or discrepancies.  That would indicate the CAPA worked because the investigation was thorough.  With fewer repeats, the workload would decrease, giving better opportunity for effort on the unique observations.

  2. Second example

I was visiting a client one day to examine their quality systems, especially deviations and their handling.  I had flown for several hours to visit the company and was met by the plant manager who indicated that the issue that I was there for had been solved and they did not need my services for that.  Since I was already there, and he was paying anyway, I suggested I look at the remediation and maybe other systems in need of help. So off we went.

Apparently, over the last few weeks the plant manager had had a great idea.  He linked pay for performance to the number of deviations in the department, and immediately (for the last two weeks at least) there had been a 20% reduction in deviations.  The first thing I did was go to the lot disposition department to see how things were and talk to the staff there.  They immediately reported that over the last week or so there had been an increase in the number of batch records arriving with serious errors and deviations that had not been highlighted.  Previously, the production department had been encouraged to self-report deviations and highlight them to QA; now it was up to QA to try to find the errors.  Clearly a step backwards.  So the metric had not decreased the number of deviations but rather the reporting of them.  The deviations were still there, just not reported.  We all want fewer deviations, but this is not how to get them.

In both these incidents, the metric had driven the wrong behavior.  So how do you set up metrics that work?  I recommend this simple process.

  1. First, identify the system that you want to work on.  In these cases, the investigation and deviation systems.
  2. Define the outcome you want.
    1. In the first case, closure of investigations.  While timeliness is important, surely getting it right, so we have a good chance of an effective CAPA to prevent recurrence, is the real goal.
    2. In the second case, of course, you want no deviations, but if they have occurred, you want them reported, so they can be investigated properly, giving effective CAPAs so they don’t appear again.
  3. Based on the input of point 2, set up metrics to drive the right behavior.
    1. For example one, you can have a metric of no repeat observations.  That indicates that the investigation was thorough and the CAPA directed at the right thing.  Hence, it solves the problem.
    2. For example two, we want batch records to arrive in QA right first time (RFT): that is, completed, checked, and with all deviations highlighted and put into the system for resolution.

Both these sets of metrics look to the future rather than simply the immediate.
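As a rough sketch, the two forward-looking metrics above could be computed like this.  The data and field layout are entirely hypothetical; the point is only that both metrics are simple to calculate once the underlying events are honestly reported.

```python
# Hypothetical sketch of the two forward-looking metrics.
# The sample data below is illustrative only.
from collections import Counter

# Root-cause categories of this period's deviations (hypothetical data).
deviation_causes = ["mixing_speed", "label_mixup", "mixing_speed", "filter_clog"]

# Whether each batch record arrived in QA complete, checked, and with
# all deviations highlighted (hypothetical data).
batch_records_rft = [False, True, True, True]

def repeat_rate(causes):
    """Fraction of deviations whose root cause has been seen before.
    Thorough investigations plus effective CAPAs drive this toward zero."""
    counts = Counter(causes)
    repeats = sum(n - 1 for n in counts.values())
    return repeats / len(causes)

def right_first_time(rft_flags):
    """Share of batch records arriving in QA right first time."""
    return sum(rft_flags) / len(rft_flags)

print(repeat_rate(deviation_causes))       # 0.25 -> one repeated root cause
print(right_first_time(batch_records_rft)) # 0.75
```

Trending these two numbers over time says far more about the health of the investigation system than a raw count of open investigations ever will.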

Are these the only examples or areas?  Of course not, but if you follow these principles, you will get improved operations.  Before any metric is established, ask the question: “Will this metric drive the behavior and result I really want?”  And be careful what you ask for.  It might not be what you really want.

Tools that make you work for them rather than working for you

October 5th, 2014

I am often called into companies to identify opportunities for improvement in processes, products and the Quality Management System as well. These projects give an opportunity to learn how an organisation ticks, its strengths and its weaknesses. Often management initiates the process by giving me their list of systems that they believe are broken, or at least not operating as efficiently as they would like.  They want these worked on, but they are sometimes reluctant to consider the systems they believe to be working well.  Or maybe these “working systems” are simply the ones without a loud squeaky wheel attached: the ones flying under the radar, silently deficient.

I usually take these lists and negotiate with the company that I should look more broadly than the systems they have identified.  That is not to create billable hours but rather for efficiency. While they may have identified some of the deficient systems, they may not have identified all of them.  It may also be that the ones they consider to be working well are in fact not working well at all, or that they have a system with overkill in place.  And if there are some very good systems, you can learn a lot about an organisation by understanding why some are good and others not.  What causes this?  It can be uneven management, or a pocket of progressive people who are 100% dedicated in the face of adversity.

The key is to bring me in once to identify all the opportunities rather than twice.  Doing it twice costs more and adds months to the timeline.

So what do I usually find?  First, management often does have a good perspective on the “bad” systems.  Often they have identified the very painful ones, the ones in dire need of improvement.  But not always.  This has to be teased out by careful interviews of process owners, stakeholders and users alike.  In the interviews, it is critical that it is the system that is examined and not the person who runs it.  These people are trying their hardest (in most cases); it’s just the tools, resources and environment that prevent them from running a successful process.  Ninety percent of the time it’s not the people but the environment they are working in that causes the problem.  Most of these organisations are working at 100 mph, with compulsory overtime needed just to get the basics done.  There is no time for future thinking; the fires are burning out of control and there are just not enough firemen to keep the place from burning to the ground.

What I also find are organisations that are reluctant to change.  The last time somebody took a risk and it failed, they were punished.  You can bet that next time they won’t risk going out on a limb.  When I talk with these folk, I often find processes that are unwieldy and overly complex.  It’s not uncommon to see SOPs 40, 50 or 60 pages long.  No wonder there is a deviation at every turn.  These deviations then flood the investigation system.  With 30 days to complete each investigation and a backlog, what happens next?  Well, they wait until day 28 or 29, because they are working on other things and are caught between a rock and a hard place.  Got to get it completed or I miss my goal of 30 days.  Get it signed off and off my desk, and get the CAPA in place.  And what is the CAPA?  It’s usually “retrain operator” or “rewrite SOP”.  These are easy to think up, and everybody is familiar with them.

And do you really believe that these two CAPAs are going to work and prevent a recurrence?  Absolutely not.  I was auditing a company recently where over 80% of CAPAs were “retrain operator” or “rewrite SOP”, and that was for a set of recurring issues that did not go away over a four-year period.  Complacency had set in.  Are we really that bad at training and writing that it does not solve the problem?  Or is the CAPA not directed at the right thing?  I think we all know the answer.

So what is my job?  First, to look at the systems and sort them into three categories.

  1. Ones that are clearly deficient – these need major surgery.  Or they may not even be in place.
  2. Ones that are overkill – look for ways to back off what is being done.
  3. Ones that meet the need – they are adequate, although they may not be world class or systems you can proudly call 10/10.  At this stage, if a system works at the right level, leave it.  It does not have to be perfect.

Categories 1 and 2 need the major work.  These systems are really tools in a toolbox that allow us to operate our business, and these tools have lost sight of what they were intended for.  Instead of serving us and helping us get the work done, they take on a life of their own.  These tools now control us and make us jump through hoops.

However, as my father once said, “If you can identify a problem, you are 50% of the way to the solution.”  And that is when the fun begins: getting these overworked people together to look for opportunities to eliminate non-value-added work, which will free up resources to put on the other Towering Inferno areas.  And it’s contagious.  Solve a problem once and the next problem is much easier to solve.

It’s these moments I really cherish.  When I see surgery on an overly complex process bring a breath of fresh air to the people involved, those are the moments I look forward to.  The look in the eyes of the staff is what this job is all about.  And it’s fun.

Setting sail on a voyage to discovery – creating a culture change in your operations

June 6th, 2014

A client of a colleague of mine is undergoing a culture change in their operations, from “old-style” quality to a newer style.  What do I mean by that?  The old style is characterized by the silo mentality, the us-versus-them distrust, a Quality function best described as a Dr. No approach.  I think you know what I mean.  You may have worked at a company like that in your career.  If you have never experienced it, you can see such companies, and the results, in the warning letters posted by FDA.  Although not all end up with warning letters; many operate for years this way.

This will be a long journey involving a lot of small steps.  It is not something that happens overnight.

To embark on this type of transformation requires several elements:

  1. A detailed knowledge of how they work and where there are opportunities for improvement.  A simple gap analysis and interviews (not an audit) will give you the answers you need.  The important thing is that you need management support for the change.  In the interviews, you need the employees to open up and speak honestly.  I pledge that management will see the results of the interviews but not who said what; the results will be sanitized (made anonymous).  Trust has to be there.
  2. You must have good interview skills.  If you have gone through Kepner-Tregoe training, you are well on the way.  The key is to keep asking why.  If you have kids, you know what that is: it’s the six-year-old’s approach to learning.  “Why is xxxx, daddy?”  You answer and they respond, “Why is YYYY, daddy?”
  3. You must listen and think how all the outcomes link back to behavior which then links back to the systems that are not working or are in need of improvement.
  4. And, above all, management support for the change.  They have to understand why the old will not sustain them and the new might, and they must create a blame-free culture where speaking out is the norm.

It reminds me of the old realty axiom.

It’s all about

Location, Location, Location.

Here it is:

Management, Management, Management


It is these system failures that help you solve things and change the culture.  Pick a key system that is not operating well (and everybody knows which ones they are), create a team of owners and customers, and start a discussion forum for all to articulate their frustrations.  Don’t let it devolve into a simple whining event.  List the issues and ask how we might do things differently.  Let each person articulate their view, asking the rest whether they have contributions.  Facilitation skills are critical at this stage.

These suggestions can then be used by the system owners to revamp the process.  Get the stakeholders in on the review.  The first rendition will not be perfect, but I bet it will be better than what was there.

So what is the new Quality style?

It’s where Quality is

  • Value added
  • A facilitator 95% of the time
  • Encouraging of partnering
  • Where user-centric systems, processes and documents are the norm
  • Where team-based approaches are encouraged
  • Consulted to solve problems.

As you begin to make the changes, it is essential to get to the point where old habits are unlearned and new ones embedded.  It helps when training becomes education, and the HOW and the WHAT sit alongside the WHY.  People understand why the change has to happen, and they buy into it because they understand why doing it the old way is not as good as the new.  They understand it because they are part of the solution; it was their idea.  They understand the consequences of their actions.  They take ownership.

This is just the beginning.  Watch for more blogs on the next steps: the successes and the setbacks.

By the way, the client’s ship has left the harbor.  The captain has charted a new course (and it’s in the right direction) and the crew are all pulling in the same direction.  Will it be plain sailing?  I doubt it!  There will be storms and other testing events, but the foundations are strong and they are determined.  Tune in for chapter 2.

So, do you see any of these warning signs in your company?  If so, you might want to drop me an email and let’s see what we can do!

How is clinical manufacturing different from commercial?

September 10th, 2012

At a recent webinar presented by Tungsten Shield on the differences between clinical and commercial manufacturing, one question perplexed me.  It went something like this:

You advocate using fewer resources to release a commercial lot of material than a clinical lot, and you point to risk management techniques.  Surely the commercial lot represents more value to the company, so shouldn’t you spend more resources on it?

My response went this way.

In the case of a commercial product, where you are making the product around the clock, turning out perhaps 100 lots a year, you expect operations or manufacturing to be highly experienced in making the product.  They have tremendous experience; they should not be making mistakes.  The QA department sees lot packages continuously and knows the weak points and strengths of the process, the testing departments and so on, and so knows where the risks are.  A clinical lot, by contrast, is probably unique, maybe made only once and never the same way twice, since the process is changing.  Similarly, the testing is evolving, and the documentation is changing and evolving.  So we do not have history with the material.  Add to that the lack of knowledge about the product, particularly in early phase.  All of these add up to a higher risk of issues, especially to the patient.  While the commercial lots do represent high value to the company, the clinical lots represent the future of the company, where errors can result in products not making it through the clinic or can cause delays in clinical programs.  And let’s not forget the impact on patients.

The questioner seemed fine with the response.