The Calcott Consulting Blog:

   Articles on SOPs:


How well has the pharma industry implemented regulations?

February 25th, 2017

This has always been a question people ask.  As a consultant, I see a slice of the industry that interacts and works with me, so I have my opinions.  However, a company that employs me may not represent the average company.  So a colleague of mine, Peiyi Ko, and I decided to conduct a survey asking these questions.  We used Survey Gizmo as the survey tool and LinkedIn Groups as the interface.  We posed 40 questions, and because of the interesting answers we got, we are publishing the results in two parts.  This post presents the first part, published in Pharmaceutical Online.  In this part, we examine the implementation of various guidances and regulations.  The second will examine how well they have impacted companies’ Quality Management Systems.

The second part will be published in a couple of months.  Bookmark my website so you will see it when the next part is posted.

Happy reading

Good CAPAs require good investigations – driving out human error from the workplace

September 4th, 2015

Do you have deviations that just won’t go away?  Do they disappear for a couple of months only to reappear a few months later stronger than ever?  Have you ever wondered why?

I won’t claim there is a single cause for this, but one I find over and over is human error cited as the major cause of the problem.  The obvious CAPA for this observation is to retrain the operator and rewrite the SOP.  I see it often in companies.

When I go in to audit a QMS, this is one of the first areas I look at.  I look at the CAPAs and tally up the “retrain operator” and “rewrite SOP” ones, and if I find they account for more than 10% of the total, I know where to focus: the investigations are not thorough or in depth.
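
To make the tally concrete, here is a minimal sketch of the kind of count I do by hand during an audit. It is purely illustrative: it assumes a hypothetical CSV export of CAPAs with a free-text "action" column; the file name and column name are assumptions, not the layout of any particular e-system.

```python
# Minimal illustrative sketch: tally CAPAs whose action amounts to
# "retrain the operator" or "rewrite the SOP" and flag when they exceed
# 10% of the total. File name and column name are assumptions.
import csv

def tally_weak_capas(path: str = "capa_export.csv", threshold: float = 0.10) -> None:
    total = 0
    weak = 0  # CAPAs that amount to retraining or rewriting only
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            total += 1
            action = row["action"].lower()
            if "retrain" in action or "rewrite" in action:
                weak += 1
    rate = weak / total if total else 0.0
    print(f"{weak} of {total} CAPAs ({rate:.0%}) are retrain/rewrite actions")
    if rate > threshold:
        print("Above the 10% mark - investigations are probably not deep enough.")

tally_weak_capas()
```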

On a recent visit to a client, they had just come out of a grueling inspection by an agency, with a total of 53 observations.  They had developed their responses, which are really CAPAs by any other name.  There were a total of 135 CAPAs, and 55% were “retrain operator” or “rewrite SOP”.  I asked the person in charge of getting the responses back whether he thought they would correct the problems.  His reply was an honest “No, many of these problems we’ve seen before and tried that, and it never works”.  So I asked him why he was doing it again, remembering Einstein’s quote that “insanity is doing the same thing again and expecting a different result”.  At least I think it’s Einstein.

I then asked, “Why did you do this when you know it will not work?”  The answer was a chilling “because the regulators will accept it and it will allow me to get back to making product!”  I wanted to say “Yes, in the same broken way as before”.

After talking with other people at the plant, it became clear that their investigation skills and tools were deficient.  They could get to the human error point.  They recognized that somebody had made an error, but they lacked the tools to investigate further into why it occurred.  Was the system defective?  Were the operators overloaded with work?  Were the work stations organized appropriately?

Very rarely is human error the root cause; rather, human error is the symptom of things not functioning properly.  It is often the result of a deficiency elsewhere.  No other industry accepts human error as a root cause – only our pharmaceutical one.

There are four main ways that things fail and manifest as human error.  There has been plenty of research in this area, but little of it in pharmaceuticals.

  1. Culture – the culture allows or encourages people not to follow procedures. Initially it may be innocent. “I know the procedure is wrong, but we cannot change it”. “I know it says that, but this shortcut will get you there quicker, and we have a goal to meet on this shift” – implying that doing it by the rules won’t let you get there on time. This often occurs in cultures that are volume driven. “Get the product out the door at all costs”. One company even had a goal of “Speed over perfection”. That was a difficult one to puzzle out.
  2. Systems – the systems in place just don’t work. The poor operator just can’t get the job done with the equipment, layout and QMS in place. This can be a change control process that is no longer serving the plant, a deviation process that everybody avoids, or a CAPA process that is viewed as an encumbrance. The systems are viewed as punishment. Gone is the idea that the systems are tools to help you stay on top of your processes; the goal of everybody is to move the paper to the next person as quickly as possible. These systems are often put in place at the corporate level or by management without asking the practitioners how it should be done. The same occurs in the layout of equipment and processes. They are not designed with the user in mind but often by a corporate engineer at central headquarters. Often operators have to take readings, remember them and go to the other end of the room, or even next door, to log the data into the permanent record. This also breeds the use of scrap paper for data transcription.
  3. Procedures – this is where the procedures and batch records are wrong, often because they were not written by the users but by other bodies. In one case I came across batch records being filled out with lots of extraneous comments. I asked about them and queried why. “The batch records are wrong” was the response. “Why don’t you change them?” I questioned. “We can’t. We are not allowed to” was the answer. “Why can’t you?” I asked. “Because Process Science owns them. They write them for us.” “If they write them, do they have the same equipment as you, operating at a similar scale?” “No.” No wonder they are wrong. Nobody consulted the actual people operating the equipment.
  4. People – which brings us to the people. They make mistakes for many and varied reasons. It could be that the training was inadequate: yes, they know what to do, but not why. They do not understand the criticality of the key steps. They don’t know why it must be done this way. Or they are rushed, and in the rush they revert to the old way.

As you read through the examples above, you will see a lot of similarity and overlap, and that is because it is rarely only one of the above that fails.  It actually starts at the management level with culture.  Management sets the tone that sets the systems, procedures and people up to fail.

So how can you detect whether you have a culture problem?  A systems problem?  Poor procedures?  Or people prone to human error?

You know you have a problem, or potentially one, if you can say yes to any of the following:

Culture problems include

  • Silos with poor communication between them
  • Box checking mentality
  • Firefighting all day and every day
  • Deadlines that are never met
  • Deviations are considered bad and people are punished
  • A blame culture that never learns from mistakes
  • People unwilling to raise the alarm for fear of punishment
  • Heavy reliance on final check – QA will catch it
  • Metrics that drive the wrong behavior

System problems are evident from these

  • Congestion on the plant floor
  • Poor work flows
  • Physical conditions not adequate – lighting, temperature, humidity
  • Unreliable equipment that breaks down
  • Poor ergonomics for operators
  • Electronic systems that are no longer tools serving the worker
  • People avoiding using e-systems
  • Nobody knows the cost of non-conformance
  • Users not involved in design

Procedure problems are exemplified by

  • Long complex SOPs – they read like War and Peace
  • Too many double or even triple checks
  • SOPs written to satisfy auditors or regulators not for operators
  • CAPAs never working
  • All investigations / CAPAs must be completed in 30 days
  • Everything given equal importance

Which brings us to people problems

  • No time for breaks
  • Workers operating on permanent overtime
  • Multitasking that does not work
  • Reliance on memory because documents and records aren’t where the work is
  • Training explains the how but not the why. No understanding of the consequences of the procedure
  • Too many distractions
  • Training in a classroom versus where the job is done

Of course, this is not a comprehensive list, but you get the picture!

Getting rid of all these types of problems requires both management and workers to take responsibility.  We must move to a culture exemplified by the following:

  1. Create a positive attitude to human error. People make mistakes because the systems supporting them fail them. The error is pointing out an opportunity for improvement.
  2. Create a blame free culture. We must have a freedom to speak out where we see an issue and focus on the event and not who did it. Ask the question “what went wrong?” rather than “what did you do wrong?”
  3. Drive out complexity. Complexity is the enemy: it makes SOPs unworkable, and an unworkable SOP translates to error. Reward simple procedures.
  4. Use user-centered design. Get the workers involved in writing SOPs and batch records and in designing equipment and process layouts. You will create ownership, and the result will be right.
  5. Introduce safeguards into processes and equipment. The operators know what can go wrong and what they do under stress. Ask for their advice.
  6. Have a rapid reporting system. A system that responds in hours, not days, weeks or never.
  7. Seek out and remove risk. The workforce is your best friend for identifying what does or could go wrong. They see it every day.
  8. Create education, not just training, programs. Education explains how and what, but also why it is done this way. It covers what can go wrong if steps are done incorrectly, which steps are critical, and why the new way is better than the old.
  9. Management must assign roles and responsibilities and hold people accountable. Ownership and pride in a job well done follow.

The road is a long one, fraught with challenges, but as my father used to wisely tell me, “Knowing you have a problem is 50% of the way to solving it”.  How true!

QA and Manufacturing should be working on the same team

May 15th, 2015

During the last couple of months I have been working with a company that is undergoing a transition as products move through their pipeline – quite a radical shift in strategy.  Over the years, their Quality Management System (QMS) has evolved.  But as with all evolution, the small incremental changes that occur might lead along a path to an improved system or an evolutionary dead end.  With this radical shift, they invited me to come in and assess what they are doing right, what they are doing badly and what, perhaps, they are not doing.

So the first step is a gap analysis, which entails reading documents and records and interviewing people.  Fortunately, management was highly supportive of this exercise, so I got extreme cooperation from all parties.  Documents (SOPs, policies, quality manuals and quality plans) were supplied to illustrate the systems.  In addition, I requested records, which were supplied readily.  These included deviation investigations, CAPAs and change controls, to name just three types.  While I cannot look at everything, I asked that typical examples be shown: I want to see the average type of record – neither the best nor the worst.  During discussions, which ranged from one-on-ones to small groups, people felt comfortable describing what they liked and what they found frustrating.  The interesting feature I noted was that the same frustrations were felt universally across the organization.  It did not matter whether it was QA, QC, engineering or production: all felt the same.

The analysis revealed that the three systems described were all weak.  While the company used an electronic system to track and manage them, the end results, the reports, did not meet the requirements of either the practitioners or me.  That is, they were neither well written nor clear.  They lacked the clear logic to explain what had happened and why (for deviations), how the CAPAs linked back to the deviations, and the rationale for the change in the change controls.  And why was this?  In a way, the people involved had treated their systems as hurdles that had to be overcome to move to the next step, rather than the tools they should be.  With the e-system, it appeared the goal was to move the document from their inbox to the next person’s inbox.

They had lost sight of the fact that an investigation’s goals should be to:

  • Uncover what went wrong, identifying the factors that contributed to the issue (they got some of the factors but not all).
  • Recover the material or data (they were good at that)
  • Prevent it from happening again (not good at that)
  • Prevent similar things from happening elsewhere (almost totally absent)

In this case, the e-system had done a grave disservice to the company.  It had taken the thinking out of the process; it had subverted the process from serving the operations to a system that had to be served.  It had also driven behavior toward short-term thinking rather than long-term.  They had cultivated a fire-fighting mentality.

The exact same thing had happened in the change control system.  There was not a critical evaluation of any of the changes.  The changes were made to save materials rather than build a rugged process or system.

To combat this systemic problem (and all three areas were linked), we embarked on a training session.  It included three elements:

  1. Back to basics of the three systems – explaining what the desired state was and how to get there.
  2. Teamwork exercises with real-life examples of challenges, using some of the principles and skills learned.
  3. Practical examples from their company. I selected examples of deviations/investigations, CAPAs and change controls and worked through some of them with the group. After I gave my read on the situations, I let them loose to try out the new skills themselves.

During the second round, there was a breakthrough by one of the staff members.  Suddenly, the light bulb went on.  I have never seen such an excited person in all my years of training.  I am going back in a couple of weeks to see how things are going.  I am quite optimistic.  I will keep you posted.

Effective auditing – one size does not fit all

April 15th, 2015

Every company I work with has a problem with their auditing program.  Some believe they are overdoing it and others feel they are not doing enough.  In a sense, they are both right.  In actual fact, they are all overdoing it in certain areas and underdoing it in others.

Back when I started in industry, and I am afraid to tell you exactly when, auditing programs were written in stone in SOPs.  The frequency, duration and number of people were defined and rigorously enforced.  Audits were each conducted the same way, and lists of findings were assembled and sent to the auditee.  If lucky, the findings were responded to and CAPAs developed.  The report was closed out and filed.  After the prescribed period of time, the process was repeated.  Often the same findings were seen at the next audit, so either the CAPA was not done or it was ineffective.  Not exactly an efficient, effective process, but it satisfied the regulators.

Today a program like that is just not acceptable.  Why?  Have the regulations changed? Have our expectations changed?  Has the world changed?  The answer to each is yes.

Over the last 20-30 years we have seen a lot change.  We have seen drug tampering (Tylenol and cyanide), counterfeit drugs in the marketplace (you get those emails offering drugs at unbelievable prices) and incidents like the Heparin / Baxter problem.  Both industry and regulators have taken note and reacted.  In the US and EU, regulators have recognized the problems and issued new regulations and guidances.  The Falsified Medicines Directive (FMD), the Food and Drug Administration Safety and Innovation Act (FDASIA) and the Drug Supply Chain Security Act (DSCSA) have been issued and are in the process of implementation.  So how does that fit into the auditing program?  The auditing program is a tool that enables you to meet the spirit of what these regulations are driving at.

We perform both internal audits of our own operations and audits of third parties.  These third parties include our CMOs and our suppliers of raw materials, excipients and actives, as well as services such as testing labs, engineering functions and distribution, to name a few.  The functions of an audit are manifold, including assessing whether we want to do business with an entity in the first place (the vendor qualification program), assessing on a routine basis whether we want to continue to use them (continuous verification), and assessing after some element has failed (for cause).

Each of these is approached differently, depending on the nature of why we are auditing.

  1. Vendor assessment – usually, you have never worked with the vendor before, or at least not recently, so this is an exploratory audit to assure they are operating to an appropriate standard that is compatible with your expectations.  Because of this, the goal is to assess all their systems.
  2. Continuous verification – you have experience with this type of vendor.  You know what they do well and perhaps have identified areas where improvement might be needed.  You are often following up from previous audits or experience with their services (described in the annual product review).  So it is often more directed than the qualification stage of vendor assessment.
  3. For cause – something has definitely gone wrong.  So this is a very directed audit towards the areas of potential deficiency.  The outcome may be to continue to use or to terminate the relationship.

Which brings us to how to conduct an audit that adds value.  ICH Q9 is a wonderful guidance that, used intelligently, can aid you in developing a truly risk-based auditing program: one that balances the “too much” against the “too little”.  I highly recommend integrating this guidance into your auditing program.  Remember, if you do not document your risk decisions, you will be found lacking by the regulators.

I use the old adage of

say what you do,

                    do what you say,

                                          prove it and

                                                           improve it.

Put another way, it is really documents, execution, records and continuous improvement.

The following steps may aid you in defining the audit program.

  1. Never schedule your auditors for more than 67% of their time, and that includes prep time and report-writing time.  The extra 1/3 is important for the unexpected, such as for-cause audits, new suppliers, new emergency programs, and the deep dives you provide as a service to your internal customers.
  2. Determine the risk factor for the particular vendor.  That includes not just the service provided but also each vendor’s track record.  This will determine the frequency, duration and manpower needed.  And it needs to be kept current, because situations change.  (A minimal sketch of such a risk-based schedule follows this list.)
  3. You have limited time at the vendor, so use it well.  Prepare an outline of what you want to accomplish (the type of audit), what you know, and what questions need answering.  If possible, do work before you arrive.  That could be sending out a focused questionnaire covering the relatively simple elements (which you can confirm when you arrive).  Even present them with a proposed agenda, so they are prepared and have no excuses when you arrive.
  4. So what do I focus on when I arrive?  A process-flow approach is my choice.  For actives suppliers or CMOs, I walk the process with my questions and get my answers in situ.  Armed with my preparation work, I walk through the facility, quite prepared to stop, even for a significant length of time, to explore further if I sense an issue.  For testing labs, I walk the samples.  This is the execution component.
  5. I look at paperwork later, and I focus on the various quality systems of interest.  I do not read SOPs or policies but rather focus on the records.  I look at deviations and investigations, CAPAs, change controls and lot dispositions.  The threads I find lead me into the various other systems.  I find these systems are the pulse of the organization and tell you a lot about the company.
  6. If necessary, I go to the SOPs and policies.  That is the documentation part: confirming that what they say corresponds to what they do.
  7. I also look at operations and people to detect signs of continuous improvement which is often picked up, not in documents, but in conversation.
  8. I usually look to see evidence of a modern approach to quality as evidenced by an active involvement of management.
  9. In the close-out, I present the observations, which I have ranked using the EU standard of critical, major and minor.  In the discussion I might even make some suggestions of how improvements might be made.  But how to address them is really the company’s decision.
  10. After you get home, make sure to follow up on the requested CAPAs once the agreed-upon time frame has passed.
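
To illustrate point 2, here is a minimal sketch of a risk-based audit schedule.  The 1-3 scoring scale and the intervals are assumptions for illustration only; ICH Q9 does not prescribe them, and your own program should define its own factors and frequencies.

```python
# Minimal illustrative sketch of a risk-based audit schedule (point 2 above).
# The scoring scale and the intervals are assumptions, not a standard.
from dataclasses import dataclass

@dataclass
class Vendor:
    name: str
    criticality: int   # 1 (low impact) to 3 (high impact of what they supply)
    track_record: int  # 1 (clean history) to 3 (recent findings or for-cause audit)

def audit_interval_months(v: Vendor) -> int:
    score = v.criticality * v.track_record  # combined risk score, 1 to 9
    if score >= 6:
        return 12  # high risk: audit every year
    if score >= 3:
        return 24  # medium risk: audit every two years
    return 36      # low risk: every three years, or a questionnaire in between

for v in (Vendor("Active ingredient CMO", 3, 2), Vendor("Label printer", 1, 1)):
    print(f"{v.name}: audit every {audit_interval_months(v)} months")
```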

BTW, one of my first stops is the bathroom.  Not because of a medical problem, but to see how it is kept.  Companies that have a good QMS have clean bathrooms.  For those with QMS problems, the bathroom can be a telltale sign.

How to handle metrics that drive the wrong behavior

March 17th, 2015

Over my career I have lived and, unfortunately, died by metrics.  What do I mean by that?  Metrics, if developed carefully and with thought, can help us attain our goals.  However, badly thought-out metrics don’t just fail to help us attain our goals; they can actually prevent us from attaining them, because they can be counterproductive.

Is this new?  No, I have seen it for decades, both working in the industry and as a consultant to the industry.  I find it amazing that these bad metrics are not confined to companies with poor compliance records; they also appear at companies that are leaders in doing things right.  Of course, for confidentiality, I cannot name names, but I can talk about them and give advice in the public domain.

What are these bad metrics?  They are metrics that have been set up to measure and control certain outputs but that, as implemented, encourage the wrong behavior.  How is that possible?  Let me explain with two examples.

  1. First example

I was visiting a company recently for a training session on quality systems.  During the break, one of the attendees took me aside and described a situation at his company.  He asked that the situation be kept private.  By that he meant not only that I should not describe it publicly in a way that identified his company, but also that he did not want management at his company to hear about it ascribed to him.  Of course, I honored his request, so you will learn neither who the person is nor the company.

As with most companies, they wrestle with investigations taking a long time to close out, which leaves the company vulnerable both operationally and from a compliance perspective.  So to combat that and to drive closure, the company instituted a metric of “All investigations to be closed in 30 days”.  Depending on the number open during the year, a person’s performance rating would be impacted.  Performance impacted equals decreased pay raise, bonus, etc.  You get the picture.

The result: the number of investigations lingering past 30 days goes down.  Management is content and everything is improved.  Or is it?

The answer is no!  Yes, investigations are closed out quickly, but are they really complete, and do they accurately describe the root cause or contributing factors?  In the haste to get a good grade, people close out investigations prematurely, with poor root cause analysis.  Without a good investigation, the CAPAs developed are not directed at the right things.  So the CAPAs do not solve the problem, and the result is that the problem reappears – in other words, we get repeat observations.

A better metric would be a goal of no repeat deviations or discrepancies.  That would indicate the CAPA worked because the investigation was thorough.  With fewer repeats, the workload would decrease, giving a better opportunity to put effort into the unique observations.

  2. Second example

I was visiting a client one day to examine their quality systems, especially deviations and their handling.  I had flown for several hours to visit the company and was met by the plant manager, who indicated that the issue I was there for had been solved and they did not need my services for it.  Since I was already there, and he was paying anyway, I suggested I look at the remediation and maybe at other systems in need of help.  So off we went.

Apparently, over the previous few weeks the plant manager had had a great idea.  He linked pay for performance to the number of deviations in the department.  And immediately (for the last two weeks at least), there had been a 20% reduction in deviations.  The first thing I did was go to the lot disposition department to see how things were and talk to the staff there.  They immediately reported that over the last week or so there had been an increase in the number of batch records arriving with serious errors and deviations that had not been highlighted.  Previously, the production department had been encouraged to self-report deviations and highlight them to QA.  Now it was up to QA to try and find the errors.  Clearly a step backwards.  So the metric had not decreased the number of deviations, but rather the reporting of deviations.  The deviations were still there, just not reported.  We all want fewer deviations, but this is not how to get there.

In both these incidents, the metric had driven the wrong behavior.  So how do you set up metrics that work?  I recommend this simple process.

  1. First identify the system that you want to work on. In these cases, the investigation system.
  2. Define the outcome you want.
    1. In the first case, closure of investigations.  While timeliness is important, surely the real goal is getting it right, so we have a good chance of an effective CAPA that prevents recurrence.
    2. In the second case, of course you want no deviations, but if they have occurred, you want them reported so they can be investigated properly and effective CAPAs developed so they don’t appear again.
  3. Based on the input of point 2, set up metrics to drive the right behavior.
    1. For example one, you can have a metric of no repeat observations.  That indicates that the investigation was thorough and the CAPA directed to the right thing.  Hence, it solves the problem.
    2. For example two, we want batch records to arrive in QA right first time (RFT) – that is, completed, checked, and with all deviations highlighted and entered into the system for resolution.

Both these metrics look to the future rather than simply the immediate.
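
As a minimal sketch of how these forward-looking metrics might be tracked, the fragment below computes a repeat-deviation rate and a right-first-time (RFT) rate from made-up records; the data structures are purely hypothetical, not taken from any real system.

```python
# Minimal illustrative sketch of the two forward-looking metrics discussed
# above. The records below are made up; a real program would pull them from
# the deviation log and the batch record review.
from collections import Counter

# Each closed deviation tagged with the underlying problem it described.
deviations = ["wrong buffer pH", "label misprint", "wrong buffer pH"]
repeats = sum(n - 1 for n in Counter(deviations).values())
repeat_rate = repeats / len(deviations)
print(f"Repeat-deviation rate: {repeat_rate:.0%}")  # lower is better

# True if the batch record arrived in QA complete, checked, and with all
# deviations highlighted (right first time).
batch_records_rft = [True, True, False, True]
rft_rate = sum(batch_records_rft) / len(batch_records_rft)
print(f"Right-first-time rate: {rft_rate:.0%}")  # higher is better
```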

Are these the only examples or areas?  Of course not, but if you follow these principles, you will get improved operations.  Before any metric is established, ask the question: “Will this metric drive the behavior and result I really want?”  And be careful what you ask for.  It might not be what you really want.