Models, Related Risks, and Controls

Session 19:
Models, Related Risks, and Controls
Séance 19 :
Les modèles et les risques et contrôles connexes
MODERATOR/
MODÉRATEUR :
Jean-Yves Rioux
SPEAKERS/
CONFÉRENCIERS :
Sylvain Fortier
Trevor Howes
Warren Rodericks
?? = Inaudible/Indecipherable
ph = phonetic
Unknown = Unidentified speaker
Moderator Jean-Yves Rioux: Good afternoon, everyone. Since the topic is fairly broad, we're going to start on time because we have quite a bit of material that the speakers would like to cover. I cannot imagine a month without being prompted to change my passwords on the internet these days. Technology has become very complex and so are the models that we're using. Although companies would like me to think that these controls or actions are really preventive, they are in fact more corrective in nature. So risk management, controls, and governance have increased in importance, and these practices are essential to minimizing the underlying risk in models. In this session you will hear about best practices in different industries and from different perspectives.
The first speaker, Trevor Howes, is a VP and actuary at GGY; you may have heard of this
software. He’s part of the marketing team. He previously worked as an Appointed Actuary for
life insurance and reinsurance companies, as well as a consulting actuary. He frequently speaks
about the topic of modelling, including model efficiency techniques and best practices. He’s
been recently leading the organizing committee seeking to form a new section of the Society of
Actuaries that will support members interested in modelling issues in general.
The second speaker—a colleague of mine—Warren Rodericks is a senior manager in the
insurance and actuarial solutions at Deloitte. Warren has over 13 years of experience in a variety
of roles in consulting in life insurance companies as well as in actuarial software companies. He
has a fairly long history of designing, building, and testing actuarial models and he’s used
several platforms including Axis, VIPitech, ReMetrica, and Prophet. He’s also experienced in
tax, mergers and acquisitions, due diligence, and audit work as well. Warren will be providing a
bit of the view of the auditor and the reviewer, both internal and external, and he's going to provide that perspective.
The third speaker, Sylvain Fortier, is vice-president, internal ratings and credit risk management,
at Laurentian Bank. He oversees all activities relating to modelling and to credit and risk
management systems. Prior to this position, he occupied various positions related to credit risk.
He has more than 15 years of experience, he has an actuarial and mathematics degree, and he has a Master of Business Administration with a major in finance and insurance. He's
also a chartered enterprise risk analyst. Sylvain comes from the banking industry and he will
provide that perspective to you. So I would like to invite the first speaker, Trevor, to come to the
podium.
(Applause)
Speaker Trevor Howes: Models are of increasing interest to everybody and I do thank you for
coming and welcome you to this session. Thank you for your interest. Just as a parenthetical
remark before starting, I went through the exhibit area today and I was delighted to find that one
of our consulting firms is operating a new modelling engine for your consideration. It’s called
the magic answer ball and it's a tried and true technology; it hardly ever breaks, I'm sure. One advantage is that it's really stochastic; there's a random chance of what answer is going to come up. And that's similar to our own product, Axis, which my firm is the vendor for. Unlike ours, this one is free.
So in terms of models, one has to start a little bit with an environmental scan and I won’t spend a
lot of time on this. A lot of stresses on actuarial software, actuarial models, and a lot of changes
especially coming forward and into the future as models are taking on more and more use.
Perhaps in Canada we were a little advanced in that respect, with our principles-based approach and integrated asset liability approach, well before other parts of the world. But
they’re all catching up in that sense and they’re all feeling the stress and the strain on their
models and on the practices in using them.
The external stakeholders, whether they're regulators or professional bodies, etc., the internal stakeholders of the board, and the public whom you serve are all pressing for higher-quality models because they want better-quality answers with more reliability. And of course the
economic environment is a stress, volatility is constant out there, you can never tell what risks
are changing. New risks are evolving and the products are constantly changing, too. So it’s a
challenge; models have to adapt to that.
They are adapting but, you know, perhaps slowly but there’s a great pressure to adapt in the top
two squares there. They have to adapt to become more real, to be more realistic, to be more
accurate, to be higher quality models, to have better granularity in the results that they will
produce. Whereas in the past, reserves, for example, and especially in the United States, would
have been more liability driven. Today, they must be asset liability driven as we move forward
into a principles-based approach.
Instead of looking at risks independently there’s a demand to look at them holistically, the whole
product, and to integrate the various risks into models that are combined, and to remove the
shortcuts that perhaps impaired the quality and the accuracy of the models within them. At the
same time the models have to be more flexible; they have to be used for more applications
because the era of using completely separate models for different purposes and the disadvantages
that that offers is coming home to roost.
Models have not only a production use but they have a development and a research use and that
poses extra problems on the way in which you manage those models and how they can be used.
In addition, of course the new methodologies do require constant changes to adapt to regulatory
frameworks that are not fixed yet and are evolving as well as the environment that’s changing
and assumptions which must change with them.
So there are those two stresses: because models are becoming more and more important for financial results and for risk management and risk analysis within companies, that is making models that much more critical and highlighting the areas in which they need to improve. One area they need to improve is reducing their expense, because they are costly; they take a large amount of hardware as more and more calculations have to be performed. And so there's a pressure
for those models to change to use technology more efficiently, to use brand new technology as it
evolves and to consider model efficiency techniques that have a measurable and acceptable
impact on the model results.
But that in itself also poses a stress on the models. The increased realism, the constant change in
flexibility, the adaptation of model efficiency techniques all mean the models are changing and
are subject to model risk. And so the actuaries and the controllers of the models have to be sure
that they have built in sufficient run-time reliability that they can support auditability concerns
and transparency and report with confidence to management and basically address and
understand the risk within their models and hope to mitigate and control that risk.
That’s the main topic of the presentation today: model risk in general. It’s too large an area to get
through into greater detail and Warren will talk a little bit about one element of the model risk
and controls when he follows me.
When we’re talking about model risk I want first to say that the interest in the model risk and in
the standards which are applied to the use of models has been increasing over the recent years
throughout the actuarial profession, and so I’m going to start from the greatest distance
internationally and talk about the International Actuarial Association. In 2010, they understood
the growing importance of stochastic models and introduced a paper called Stochastic
Modelling – Theory and Reality from an Actuarial Perspective. Of course the design and
implementation of models as well as the governance of those models were key points within that
paper.
In 2010, they also produced a note on the use of internal models for risk and capital management
purposes by insurers. And while early approaches to capital were certainly more on a standard
approach, stress testing approach and factor approach, more companies and certainly in Europe
are moving to internal models, a more demanding perhaps more realistic capital measurement,
hopefully lower, to satisfy Solvency II types of demands.
In April of this year, in fact, there’s been a draft statement of intent. The first two papers were
educational notes, just to raise the issues and to inform people, but in 2014 they have produced a
statement of intent regarding a standard of practice, international standard of practice, entitled
Insurer Enterprise Risk Models. And so we’ll look for some further information there as that
standard evolves in terms of how it applies to actuaries and the way in which they use their
models. This standard issue is one that I want to come back to so I’m going to move a little
closer to home to the society and to the Actuarial Standards Board in the U.S. at the moment.
There was a very important and very useful paper produced just over a year ago in December
2012, a research report actually based on a survey of U.S. and Canadian users, actuaries and
companies, and the title was Actuarial Modelling Controls: A Survey of Actuarial Modelling
Controls in the Context of a Model-Based Valuation Framework. Now those last words are key
because it’s with the advent of PBA (principles-based approach) that valuation was going to be
based on flexible dynamic models and not on more traditional production systems. And so the
topics in this paper were very interesting and I think very helpful as we go forward.
Also, as was mentioned in the introductory remarks, I have led an organizing committee to petition the Society to create a new section, and the section is going to be called the Modelling
Section. The goal there is to try to find a good natural home for those who are interested in
modelling, per se; you can think of all the different aspects of modelling, it’s not just developing
standards of practice or not even that. It’s more an education and a research context in the
design, the specifications, the implementation, the testing, the validation, the use of and the
documentation of, and the efficiency of models. Anything related to models in general we think
would be a topic of interest to our group. And especially we are talking here about models that
are used for life and annuity business, health, and perhaps long-term care business; reporting is going to be our initial focus, but it could well broaden out into other types of models. So look for
an announcement on that within a month because it’s going to be approved hopefully over the
next weekend.
That’s an educational and research focus. The other side is standards of practice and that’s where
the Actuarial Standards Board comes in of the two countries. In the United States last year in
June, they released an exposure draft on modelling and so I’m not going to go in detail on that. It
is a little bit relevant to a Canadian initiative that started as well, which I’ll talk about in a
second.
On the Canadian side, of course, again you can look to the regulators and you can look to the profession for growing levels of interest in modelling. It started back in 2008
for OSFI because they produced, in tandem with the industry and with the profession, a
document on the Canadian vision for life insurers’ solvency assessment. And that was, in large
part, a similar approach to Solvency II as it was evolving in Europe at that time. In 2010, they
produced a guidance note, Guidance for the Development of a Models-Based Solvency
Framework. Again, models are highlighted as being key here for Canadian companies. And in
March 2012, they introduced a regulation on the use of internal models for determining required
capital for segregated fund risks, in particular. Regulator interest, of course, is in the governance and the control of models, how they're being used, and their reliability, a very natural concern and one that we should share as actuaries.
From an actuarial professional viewpoint, there was a report back at the same time, 2008, as the
vision statement on the left and that was a report called Risk Assessment Models. I worked in the
group that produced that one. That was six years ago; it doesn’t seem that long. That document,
like all the others, is available if anyone wants to refer to it. Some of them are quite lengthy. I
can easily send people copies if they’re having trouble finding them—just send me an e-mail or
come and get my card after the session. I’d be happy to help you out.
In February of 2013, there was a new standard of practice introduced called Relating to
Appointed Actuary Opinions with Respect to the Use of Internal Models to Determine Required
Capital for Segregated Fund Guarantees. Very narrow focus there on seg funds, but that was an area of emerging interest and need, because that's where we were really using these complex models, and for the first time from a stochastic point of view, or at least it's certainly an area of public interest.
Not long after that, we introduced another notice of intent about the desire to produce a standard
of practice on modelling in general. It was not the first notice of intent; we released one several
years before and that one didn’t go anywhere in the sense that it got comments and there seemed
to be disagreement as to whether we really needed a standard and what its focus would be. But
instead of disappearing, the Actuarial Standards Board reformed the designated group and
launched a new notice of intent. And so we do think that there will be a standard coming out in
the not too distant future on modelling.
This proposed Canadian standard is talking about how the actuary should acquire adequate
knowledge and understanding of the model and of its limitations and risks. And it's clear that no
matter how complex and broad and detailed this model is, the actuary has to have some
confidence. Is the model suitable and appropriate for its intended use? Has he adopted
appropriate strategies for model risk mitigation?
Now, in the Canadian discussions there has apparently been some debate on whether the
Standards of Practice should be concerned with the construction of the model at all or just its use.
To me, you can’t separate the two, the design, the construction, the implementation of the model
from its use and the actuary has to be concerned with all aspects of it. But still, there may be
questions of what goes into the wording. So if you want to hear more about that, there is a
session tomorrow, Session 32, on the Actuarial Standards Board, and Bob Howard will be talking about
this new standard of practice.
I believe that it’s not just standards that the actuary should be concerned with. It’s not just
reducing risk either or improving control; it’s also improving the flexibility and efficiency and
the maintenance and operation of his models, because they’re so key to his life and to his
everyday work. The companies need the actuaries to be more efficient, to work together and not
only with less risk but with more quality and with more efficiency. And I think that there is an
opportunity here to engage in actuarial transformation by improving the way in which models are
designed and used within companies.
Now, mitigating model risk is the key focus though, and one I want to talk a little bit more about.
What is model risk? Well, I won’t go into a definition; you can see the Standards of Practice for
them, they’re very general. But in general, model risk is a term referring to the possibility of
error or loss resulting from the use of models. You can imagine all sorts of circumstances when
that might happen. If you think about error, well, that’s obvious; I mean we’re talking about a
reporting or a calculation error where a model really didn’t produce the results that it should have
if it was done correctly. But there are also strategic errors because if results don’t come out in
sufficient detail or in sufficient accuracy from models, they can lead to wrong decisions or no
decisions in some cases, which may be just as bad. And so strategic errors from a lack of
information or from wrong information are also big risks.
Now, when you think of a model that can go wrong, it's usually because it's got an inaccurate implementation or realization. The difference there is that the implementation is putting the conceptual model into an IT framework, while the realization is actually running it with inputs and
getting results. You can have errors in all of those spots, which I’m going to talk about a little bit
more.
The errors can creep in at the initial development and testing and design or they can come in over
time as the model is maintained and run through lack of proper control because the models are
constantly being updated and changed and so you have to be wary of that risk and those risks
throughout the model’s lifetime. But it’s not just in the model per se or the calculation part, of
course, it’s also in the development of assumptions and their application and in other aspects and
again, I’m going to expand on that a little more in a second.
Up above, we talked about error or loss resulting from the use or the misuse. If a model is not
suited for the purpose it’s used for then that’s an error. It’s not giving the kind of quality of
results or it’s not really been designed to reach and support the decisions or the conclusions or
the measurements that you wanted to use it for, that is inappropriate and something the actuary
has to be concerned about. Those kinds of misuse can result because somebody, whether it was the actuary or somebody further down, misunderstood what was in the model or what it was for; or through negligence, because they didn't take the care to find out or they weren't educated enough to find out; or merely because of a lack of alternatives: that was the only model available, so that's what we had to use. All of those are bad situations to watch out for.
Model risk is present in all stages of the model life cycle: in its original specification, in design,
in its implementation, including the initial validation and documentation—was that sufficient? In
the operation of the model in the production environment, but also in the model maintenance and
update and how it changes in the production environment over time. So model risk might also be
considered not only in all stages but also in the elements that are related to the model’s function.
By that, I mean, if you think of this as a graphical picture of a model, the internal triangle may be the software component or the model engine that does the calculations. It relies on various pieces of input: on the left side, the in-force data; on the bottom, various assumptions or tables that drive the description of the product, the way it's illustrated and cash flows are produced; the global scenario, the economic assumptions; and then the output: how are results calculated, transmitted out of the model, further transformed, and communicated to people? There's a risk of
talking about model risk and mitigating it.
But it’s not just this model in terms of this function and the actuarial calculation engine and what
it’s doing, it’s also the whole cycle of the enterprise application. All the way from extracting the
original source data, whether it’s business data for the in-force or experience data for developing
assumptions, and moving it through the cycle and especially in a production cycle to the
calculation engine, the production results and the storage of those results back into a corporate
database. The whole cycle of the production run has to be considered and even that’s not
sufficient. It’s not only that but it’s the software operating system, the IT infrastructure, the
hardware in which it’s operating because changes in those things can produce changes in model
results, unexpectedly, but it can happen.
I’d like to go back here to the final dimension and expansion on this. It’s not just the IT
infrastructure and the IT model and application in an enterprise setting that you need to think
about; it’s the people that are involved in using it and working with it. I go back to this quote
from another report from the SOA: “The challenges in a PBA world will be solved through
collaboration and people exercising wisdom and judgment to make actuarial and business
decisions in the face of ambiguity.” OK, but look at the diagram. All of these aspects of the
model, the models, the hardware, the reporting tools, the processes, the governance, the insights,
the judgment, are elements that they were concerned with in this report, but they all revolve
around people, their understanding of these things and the way in which they apply them and
maintain them and change them over time.
Risk mitigation, if you want to control and lower the risk in your models, is primarily addressed
through proper model governance and that is one of the aspects that I want to talk about today in
a little bit of depth, and it will be talked about a little bit more by Warren, perhaps, in his remarks.
Thorough initial validation of the model design, model implementation, model realization is
obviously a key first step, but it's not just the initial validation; it's robust, controlled model
maintenance throughout that model’s lifetime as it’s used and changed, to prevent unauthorized
and undetected changes. All changes should be what you want to do and exactly what you
expected.
Analysis and testing of model results can help during the production runs and afterwards: testing those results for consistency with prior runs and with expectations for reasonableness, and further analysis and attribution of the changes between what was actually produced in the previous quarter, what was expected to be produced in the next production run, and what was actually produced because of changes. Validation isn't enough
to do initially; it must be periodically repeated and hopefully supplemented by a complete
independent review. Those are all tools to help you make sure that you’ve understood your
model, that you haven’t lost some understanding of it and that the model is still relevant and
appropriate for its use.
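A minimal sketch of the kind of run-over-run check being described, with hypothetical result names and a hypothetical tolerance rather than any firm's actual procedure:

    # Compare current production-run results against the prior quarter and flag
    # movements larger than a tolerance so that each large change gets attributed.
    prior = {"reserve": 1_250_000.0, "policy_count": 10_420}
    current = {"reserve": 1_310_000.0, "policy_count": 10_395}
    tolerance = 0.02  # flag movements above 2% for explanation/attribution

    for item, prior_value in prior.items():
        current_value = current[item]
        change = (current_value - prior_value) / prior_value
        status = "EXPLAIN" if abs(change) > tolerance else "ok"
        print(f"{item}: {prior_value:,.0f} -> {current_value:,.0f} ({change:+.1%}) {status}")

In practice the attribution would be broken down step by step (data update, assumption change, method change), but the basic pattern is the same comparison against the prior run and against expectations.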
There’s also operational risk in here and I want to talk just a little bit about that. One of the key
operational risks related to models is human error or key man risk. And if we drop down to the
final bullet there, how do you address that? You address that by proper documentation, so the
people who go to use the model can understand what it’s for and benefit from the results of
previous people’s review; proper education; proper training; and proper disclosure so that the
people who are looking at the model’s results understand what its limitations are.
But it’s not just human risks that enter into operational; it’s also technology risks, technology
changing, technology failing, technology not coming up to expectations and even in the worst
case, disaster recovery, something that takes down the whole system at a site and do you have a
way of recovering and reproducing what can be a very complex IT environment as well as model
and getting back up to speed in short order.
We need controlled management of production models to mitigate this risk and that means that
all components of the production model must be managed in a separate, isolated production environment, in my view, with user groups with defined roles and permissions, and with production models used only for production runs. I won't go through the rest of these components of production models and good governance procedures. Maybe Warren will talk a little bit about those. But this
is a challenge, this is really a challenge for many companies. Not everybody; every company is
different. But we are amazed sometimes at the challenges we see and part of this is because
historically, actuarial models have been hived off from production applications in key ways.
The IT area has typically, and especially in days gone by, been totally in control of production
applications and you can see the characteristics there. I liken it to a castle with a moat, you know,
with restricted access to it, all governed by the IT people as soldiers at the gate. Whereas
actuaries wanted to do free and easy modelling applications, they wanted to play in their sandbox
to do pricing and research and ad hoc investigations and so they moved to PC technology and the
IT role was often minimized. That has produced a challenge in that there’s a strained relationship
in many cases between actuarial and IT, very little involvement sometimes in the modelling
process except for providing the high-end IT infrastructure and even then they don’t really
understand the demands of the models very well.
We’ve drifted apart from the IT area because of the reasons said before and we have because we
needed direct access to the data for flexibility and speed of response so we grabbed it and used
our own tools on it and we’ve used manual processes to further modify and use model results.
All of these things are challenges in a production environment and have to be addressed by
improved governance.
Now, what’s the second challenge? The second challenge is that models have often evolved into
a silo type of environment within functional areas pricing, valuation, and planning, separate
models, separate software often has been evolving and used, different lines of business have used
different models because they needed to respond quickly, they had control of their own software.
This has produced a bad environment for producing high-end corporate models and for the
consistency of modelling results throughout a company. It’s amazing that companies haven’t
been able to come to grips with this and I think improve the situation.
So moving forward, as my final slide, I'd like to say that there are two opportunities here that I would like to throw out; there are many more, and many more issues to discuss. First, I think that models in companies should be considered much more as a corporate asset, not as a departmental tool. The company should develop and support model governance policies that are applied uniformly across the company, and it should use model stewards and model councils to direct and improve practices, achieve more consistency, and get the best benefit out of its models.
Second, to get the best benefits, they’re probably going to need IT or vendor assistance or both to
achieve the kind of production environments that I think are really called for by today’s
governance standards. That includes roles, permissions, access and so on and proper use of
development environments versus production environments as well as a full automation of those
production models from start to close. All these things are possible and I’m looking forward to
great progress in the future and I look forward to any questions that you have at the end of the
session. Thank you.
(Applause)
Moderator Rioux: We will keep a few minutes at the end of the presentations to ask questions of the speakers.
Speaker Warren Rodericks: Hi, my name is Warren and I work for Deloitte along with Jean-Yves, and actually I used to work with Trevor at GGY a few years ago. I wanted to just talk
about the . . . you know, I do a lot of audits now and I want to talk about the control environment
from the perspective of an auditor and what we kind of look at. So I wanted to start with a
discussion of the regulations and the framework and what drives the requirement. I was hoping
that I wouldn’t end up giving the topic right after lunch, but here we are.
I guess the genesis, I have to start with this, the genesis for the modern requirements comes from
SOX, and so all of this extra work that you have to do for controls and documentation you can
basically just blame on Enron.
If we start with SOX 404, it’s a section of SOX that talks about the requirements for internal
controls over financial reporting (ICFR) specifically, and what those requirements say is that management has to make an assessment of its control effectiveness and then, on top of that, the auditor needs to provide assurance on that assessment, to say whether it's good or bad. In addition to that, SOX also requires you to use an internal control framework. It doesn't say
specifically what framework, but you have to use a framework when implementing and
designing controls within a company. That’s kind of what the U.S. rules say and SEC-registered
companies need to abide by.
In addition to that, SOX 101 also incorporates the role of the PCAOB, the Public Company
Accounting Oversight Board, and so for an auditor those are our auditors, those are the people
that we’re afraid of. They put out a bunch of additional guidelines which provide additional
insight as to what the internal control framework and what ICFR means and how you should go
about implementing it, and that’s in auditing standard number five.
In Canada we followed suit shortly after the U.S. did and after the whole, you know, Enron and a
bunch of other companies collapsed, we formed CPAB, the Canadian Public Accountability
Board and they’re the equivalent of the PCAOB. They came up with national instrument 52-109,
which is the equivalent of SOX 404, and it says very similar things with a few exceptions, and
that requires the establishment and maintenance of these internal controls over financial
reporting as well. They also mandate that you have to use one of these internal control
frameworks without actually again telling you what that means, and then management needs to
certify that they had some hand in, or were responsible for, designing the controls, and they have to
give an evaluation of the effectiveness of their controls as well as discuss any weaknesses they
found in their assessment.
When we speak of controls, we mean specifically the controls over financial reporting. There are controls for all kinds of things, if you think of, you know, a factory and quality assurance; there are controls that make sure you do things to a certain standard. But these rules specifically deal with financial reporting controls, all of the processes that are involved in financial reporting, and therefore what we do as actuaries with our models all falls into the scope of this.
As I said, there’s no specific opinion required this time for the Canadian rules. That’s the main
difference: the auditors don’t actually have to opine on whether a company’s internal controls are
sound or not. But as an auditor, it’s still something that when we come in and we’re bothering
you for stuff and checking through your books and things, that’s still part of what we’re looking
at. We’re still looking at internal controls, so either way if you’re an SEC registrant or not, it’s
something that auditors will take a look at.
I mentioned that "internal control framework" has been peppered around these rules without actually explaining what those frameworks are, so I'll talk about that a little bit. It's very, very dry and you don't really want to have to read through the whole thing; just try and take away some of the abbreviations I throw out there.
The main one is the very first one; it's called COSO, which is the Committee of Sponsoring Organizations. It's basically a group of accounting associations in the U.S. that got
together and they came up with a framework for controls and this is the one that you mainly see.
I think it’s like 80 plus percent of companies use this framework when they’re coming up with
their design and implementation of internal controls in their companies. There are a couple of
others here; there’s this COCO one, which is basically the Canadian equivalent, and then there’s
the Turnbull Guidance, which is the UK equivalent. All three of those, although we’re talking
about financial reporting controls, all three of these are sort of entity-wide. They deal with not
just financial reporting but they also deal with operations and they deal with other things, IT and
things like that. Then there’s the last one, which is COBIT and that’s an IT-specific one, and that
deals specifically with information technology related issues. It’s related to COSO. It’s very
popular. If you ever look up internal controls, which I hope you never have to, then you’ll
probably run into this one, it’s a fairly popular one.
As I mentioned, the main framework is called COSO and it’s meant to cover operations, it’s
meant to cover financial reporting, and it’s meant to cover compliance. Compliance means to
laws and to standards and things like that, and the relevant one obviously is financial reporting
for us. The COSO framework defines internal control over financial reporting as a process designed to provide reasonable assurance regarding the reliability of financial reporting.
Now, this COSO framework it doesn’t . . . It’s really loosey goosey. It doesn’t say, “Use this
specific control in this instance.” It’s just a set of categories and it’s a set of principles and that’s
it; you’re sort of left to interpret what it means afterwards.
I’m just going to go through what the categories mean just on a high level. There are five
components and they have 17 principles and the very first component is the control environment.
What the control environment is trying to do is sort of set the tone of the company. So the
principles involved there have to do with roles and responsibilities and ethics and driving a
culture within the company that makes reporting and makes openness and integrity sort of key
within the organisation.
The risk assessment component deals with the identification of risk and setting objectives. The
control activities component, that’s the actual designing and building and maintaining of controls
themselves. That’s the part that’s probably the most relevant to us and that’s the part where, you
know, involves the most work, so that’s the day-to-day stuff, the documentation, the sign-offs;
all of that stuff kind of falls under that bucket. Then we have information and communication
and that’s mainly meant to cover the communication, internally and externally, to make sure that
it’s open and clear and there’s no hindrances of communication. So, for example, whistle-blower
hotline or something like that, establishing that these avenues are here for people to
communicate externally and internally. And then there’s the monitoring bucket, which basically
covers the ongoing monitoring of the controls activity and the reporting of any deficiencies.
OK, so just a word on how controls are categorized, the different types. As auditors we basically
look at two specific types of controls when we’re dealing with actuarial controls, and those are
preventative and detective. Preventative controls, these are the controls that help to stop or limit
a bad event from happening. You might think of systems access, like limiting systems access,
limiting access to your actuarial models or limiting access to data as well as separating roles, so
having someone who’s the doer being different from someone who’s the checker. These help to
prevent fraud or errors or things like that, so it’s a preventative-type control.
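A minimal sketch of the doer/checker separation just mentioned, with hypothetical names and roles; it only illustrates the idea of a preventative control, not any particular access-control system:

    def approve_model_change(change):
        # Preventative control: the person who signs off must not be the person
        # who made the change, and the reviewer must hold sign-off permission.
        allowed_reviewers = {"alice", "raj"}  # hypothetical reviewer group
        if change["reviewer"] == change["author"]:
            raise PermissionError("author cannot approve their own change")
        if change["reviewer"] not in allowed_reviewers:
            raise PermissionError("reviewer lacks sign-off permission")
        return True

    print(approve_model_change({"id": 101, "author": "bob", "reviewer": "alice"}))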
Then there’s the detective type control, which is meant to find and try to fix any kind of errors or
risk events that happen after the fact and so doing reconciliation, recalculating policies, things
like that, those are the kinds of things that we try to find it after it’s happened.
From our point of view, the preventative controls are the better type; I think it’s kind of obvious.
In addition to this, there are all kinds of other types of controls that you might [find], if you ever
decide to look it up. There are directive controls, which actually don't have anything to do with the financial reporting itself but help create the culture and influence the behaviour. For example,
setting up a code of conduct, things like that, that’s the kind of control that aims to influence the
behaviour to get the result that you want. There are mitigating controls which deal with limiting
the damage once a risk event has happened and the main one you might think of might be
disaster recovery process.
With respect to risk assessment from our point of view, I’ve got a framework here of what an
actuarial process looks like, and I know someone is going to ding me for it because it doesn’t
look like their model. The way we look at this, we look at a process and then we break it up into
four general categories of risk and this is the way we at Deloitte do it.
The first category would be data, and within the data then, there are different activities that we
would look at in order to ascertain whether data has been handled properly or not. That almost always involves looking at controls around data. Data controls, for example, that
we might consider would be access, who has access to the admin systems, once extracts have
been created, who has access to those, and then every time data moves from point A to point B to
point C or gets manipulated, doing totals, checking, reconciliation, things like that to make sure
that you haven’t lost anything or gained anything along the way.
In terms of assumptions, the second risk area that we would look at would include things like
impact assessments, the assumptions review and sign-off process. These are a couple of the types
of controls that we would check when we’re doing an audit. Looking at any kinds of controls
around the experience study process; looking, for example, if you make a change in your model based on assumptions, at the pre-model assumption screen and the post-model assumption screen and doing comparisons, and things like that, those are the kinds of
controls we might consider looking at as part of assumptions.
As part of models, some of the things Trevor already mentioned, like the version control, if
you’re implementing a new model or you’re updating it, things like back testing, waterfall
testing, the sign-off, the report, is it a different person who signed off on the model compared to
the person who actually implemented it, because you don’t want them to be the same person.
Those kinds of controls we would look at from the model point of view.
Lastly, there are the results. For results, we would consider anything that has to do with the
aggregation of results or the reporting of results or the validation, so things like checking policy
calculations; checking source of earnings is certainly something that we always look at as a
control to check if the results make sense or not, trending analysis, things like that. So that’s kind
of the overall approach that we take, the risk-based approach that we take to looking at an
actuarial model or process and the kinds of controls that we would consider.
Given that, there are a number of areas of improvement that we see from day to day, not just here
in Canada but in the U.S. and in Asia and everywhere. They’re generally very common actually,
the areas for improvement, areas to be strengthened, and one of the main ones is always
documentation. I think everybody probably would say that that was the first place that controls
can be strengthened is within documentation.
So, formal documentation of a lot of processes: a lot of times we have processes that are maybe ad hoc or not officially formalized. In particular, I think when you have a process where a reviewer checks something that someone has done, you would expect the reviewer to be in a position to challenge what's been done, whether it's a model implementation or an assumption change or something else. There may be some kind of documentation around the fact that a meeting was held, but the actual challenge itself, if there are minutes of a discussion where you could see that the reviewer said, "Well, I disagree with this, it should be this," or "Have you considered this?" or something like that, that's something our auditors really like, actually; not just actuarial auditors but all auditors. Being able to see that there was actual challenge, documented evidence of it, is always good and it's something that we're always pushing for.
On top of that, there are change management-type controls, which can always be improved; that's just across the board. But in particular I think spreadsheet management is probably an area for bigger improvement, just because spreadsheets are easier to manipulate and people feel more comfortable with them, so there's less rigour around the type of controls that you see. Access to spreadsheets, for example, especially material ones, and you can have material spreadsheets, tends to be far looser than it would be for actuarial software, like a third-party software or something like that. Again, version control and documentation like a control log, for example: every time a spreadsheet is changed, it's not that often that you see a control log, you know, this person signed in, this is what changed, this is the date of the change, and maybe a sign-off or something like that. That's always an area for improvement.
A couple of other areas that we noticed: model validation can always be more thorough than it is, if you
implement a new model, doing sample policy checks and things like that. It’s obviously an issue
of timing but there can always be more done there. And then another one is the key person risk.
This is especially true when you have a legacy system, so you have somebody who is the expert
in this one model, that’s been there forever and knows everything about the data and the data
format, but they’re the only person who knows. So it’s very difficult to get somebody who can
check that person’s work and if that person leaves, then you’re on the hook; you need better
documentation of what that person does to be able to deal with the risk of that person leaving.
That’s my experience about the auditing perspective, what we see as auditors and kind of the
overall framework, and so now I just want to talk a little bit about the survey that was done by
the SOA. This survey was done in 2012. I think they approached 100-plus companies, they got
30 respondents, they asked 55 questions over six different categories. You can easily just Google
it and look it up. It’s pretty good actually; there are some interesting things here.
One caveat, though, that I would make, and I think that maybe we’re going to discuss this as a
panel at the end of this, is this survey was to U.S. companies who were subject to U.S. rules.
They have different kinds of products, they have more reporting regimes than we do, they have a
different environment where there are different types of software that are used for different
products, which is not nearly as true in Canada. So the responses are a little bit different—you
have to take them with a grain of salt, I think. But the categories of questions that were asked in
the survey cover governance standards, the modelling process, systems access and change control, model assumptions management, model input management, and model output management.
I’ll just go through the results of that and just for the record, the one that they identified as
needing the most improvement was the third one, the system access and change control.
Quickly going through this, the governance standards, the recommendations they made that came
out of this survey include a formal model governance policy, the regular review of models and
processes against that policy, and developing the corporate culture that aligns with the policy.
It’s similar to a lot of the things I’ve already said and things that Trevor has already said.
In terms of the modelling process, that category of questions, the first recommendation is the
consolidation to one model, one platform and where that’s not possible then to put extra controls
around it. I think we as a country in Canada are much closer to that than they are in the U.S. for
reasons that I mentioned. There’s not just one dominant software in the U.S. like there is here.
The other recommendation is the establishment of the model steward role and I think that’s
something that’s become very popular now in the U.S. with all the financial transformation-type
initiatives going on. Having one person who’s outside of the production process, who’s there to
kind of enforce the rules, make sure that people do what they’re supposed to do, following the
governance process, things like that, that’s becoming a common theme now, especially with so
many moving parts within an actuarial model or process.
For systems access and change management, the recommendation is to have formal change management processes actually in place and documented, as well as a calendar for internal model releases and updates.
Under model assumptions management: automated assumptions input into the model, which, I don't know if that's the exact right answer or the thing I would recommend, but we can discuss it afterwards; a formal sign-off process for setting model assumptions, which I think most people do; and documenting and analyzing the impact, so an impact assessment document whenever assumptions change, which is very common as well.
Under model input management: automated model input, which is very common, and then analytics to test model input, so any kind of analytics or testing that automatically checks whether the totals going into the model equal the totals coming out of the model, for example, those kinds of tests.
And then lastly, model output management: having standardized model output, having the same type of model output for every product, cash flows presented the same way, reserves presented the same way, other values presented the same way. Lastly, store the model output in a data warehouse so analytics can be performed on it later.
That’s the recommendations. There are more than just this; within that paper they talk about the
process, the deficiencies they saw in the respondent companies, and things like that. The survey is
interesting if this is a topic of interest for you. I would recommend reading it if you haven’t
already. So anyway, that’s where I’ll leave off. I’ll pass it over to Sylvain and we can go from
there. Thank you.
(Applause)
Speaker Sylvain Fortier: I’m kind of happy that I haven’t seen your slides before I made mine
because I could have repeated exactly the same but I took a different angle, first because I’m
here presenting more the view of the banks. I will concentrate on the credit risk models,
governance in a bank. You may be aware, and also I’m concentrating when I’m speaking of
credit risk, I’m concentrating on lending portfolios of banks, those loans that we do to
corporations, small and medium businesses, real estate developers, and consumers.
In Canada, we have two approaches for calculating capital for banks. We have the internal rating-based approach, which is based on internally developed models, and we have the standard approach. The standard approach doesn't need models to calculate capital; it's essentially risk weights that we apply to different portfolios. But many of the standard-approach banks use internally developed models anyway. They don't have the same regulatory requirements as the internal rating-based approach banks.
In Canada, the big banks, National Bank and up, are on the internal rating-based approach and the others are on the standard approach. I'm from Laurentian Bank, and Laurentian Bank has started the process of moving to the internal rating-based approach; it is the first bank in Canada to take that decision on its own. We weren't allowed to move to the internal rating-based approach in 2004; even though we asked OSFI to move to that approach, they refused, and I think it was a good decision at that time.
The next few slides, and I'll stay most of the time on these two slides, are really based on the internal rating-based approach that the big banks apply. First and foremost, what banks do is segment their assets into different segments. Each segment has a homogeneous risk profile, and for each of those segments there's a set of models, policies, data, processes, and systems that are applied. We call this a risk rating system, so a system in a way; it's a very broad definition.
Some of those risk rating systems share policies, IT systems, and controls, but they never or rarely share models and data. For each of those risk rating systems, we have four parameters that we use models to estimate. At the borrower level we have the probability of default, and at the facility level we have the loss given default, the exposure at default, and the maturity. Maturity applies just to corporate loans, not to consumers.
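For readers less familiar with these parameters, the standard way the first three combine (general background, not something the speaker spells out) is expected loss = PD x LGD x EAD; a small illustration with made-up numbers:

    # Illustrative only: borrower-level probability of default (PD) combined with
    # facility-level loss given default (LGD) and exposure at default (EAD).
    pd_borrower = 0.02    # 2% one-year probability of default
    lgd = 0.45            # 45% of the exposure lost if default occurs
    ead = 500_000.0       # exposure at default, in dollars

    expected_loss = pd_borrower * lgd * ead
    print(f"Expected loss: {expected_loss:,.0f}")  # 4,500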
Then what we have, let’s say the governance itself is grounded on four control functions and
policies and supervision by senior management and the board. The four control functions that
we’re looking at is independent validation team or function, the loan review—and I’ll talk about
them later on in more detail, so loan review, self-assessment, and internal audit.
It’s easier to see how these controls and supervision are applied in time when we look at the life
cycle, the model life cycle. Essentially we have five stages and I took my slides from a
presentation we did or training we did to the Board and I just reformulated them because the
board itself had a hard time to understand the life cycle, so it will be good if you told me or
feedback from you that you understand the credit life cycle that I’m going to present to you
because I just revamped everything.
We have five stages: develop, validate, approve, implement, and the last one, which can last
many years, is use and monitor and then we have those control functions that work around those
stages.
The first stage is model development. At the model development stage there are many things that happen, but we must follow a rigorous process, and there are different layers of supervision that we can have. What I promote is to have a development committee that follows the main steps of the model. This committee is formed of business people that have a quantitative background, or at least a risk background or risk expertise, so they can bring their expertise to the table; people from the credit group, who have this expertise; and finally the model team. Other groups can bring their expertise to the table too, like the IT group, which
can be present at the data selection stage where they can bring their opinion on data quality or
what can be used in the model or not. Also loan review, which looks at many loans, can bring its expertise or opinion on the quality of the data.
The main stage that the committee will look at is the definition of the purpose of the model. Then
they will look at the selection of the data that it’s going to use, then the variable selection, and
finally the model selection itself.
I’m also putting in place another, let’s say, committee—I don’t like to call it “committee”, but
let’s say “group”—group of experts more in the development model team where they follow in
more details each step of the model development. So they bring up or they bring together all the
expertise of the department so that each step is done appropriately so that we can deliver a
quality model and not face a problem at the validation stage or at the implementation stage. So
this group is very important to bring the expertise.
All this may seem very excessive. It is useful, but also we need to use judgement in
implementing those controls within the development stage. Not all models are that material so it
depends on the materiality and the impact of the models on the business and on the capital itself.
Once the model is developed, it is passed to the independent validation function. By definition,
this function is independent of the development group and also it is independent of the user
group. What they will do is look at different dimensions of the modelling process. I will just go ahead a few slides because it will be easier. The first dimension that they will look at is the default and explained variable definitions. For capital calculation, we use a default definition that is strict enough that we cannot vary it a lot, but still we have particularities in each product. It needs to be well defined and described at length, so that the person doing the validation understands what we did and what the differences are between the different portfolios.
The explained variable also can vary depending on the portfolios. If we speak of the loss given
default, it depends on the period of recovery, so for some loans it won’t be the same depending
on different processes. If you have a CMHC insured residential mortgage, the recovery process is
lengthier because of the process itself. So we need to describe this in length.
The second dimension is the data that we used. It's an internally developed model, so it needs to be developed on internal data. Sometimes we're faced with small portfolios or low-default portfolios, so we go outside and bring in external data. If we do bring in external data, we need to document why we brought in those data and whether they are representative of our portfolio. That's what they'll be looking at: how we dealt with it, how we made those data representative of our portfolios.
The third dimension that they’ll be looking at is the development methodology. There’s many
statistical or, let’s say, expert methodology that can be applied to build a model. We need to
document which methodology we choose, but also which alternative methodology that we
discard and why. That’s what they’re looking at.
The final model needs to be documented. Trevor and Warren spoke a lot about documentation. The documentation needs to be appropriate; I would say the documentation must be understandable by my grandmother. At the least, when you give it to the IT group they should be able to implement the model without coming back to the team and saying, "Can you explain?" They need to be able to
implement it with only the model documentation. If the documentation is at that level for the IT group, it will be understandable and it will be of high quality.
The next dimensions are the initial performance test and back testing. We developed the model with one set of data; we should test it with out-of-sample and out-of-time data, and with real back-dated data, so that we can be comfortable with the initial performance of the model. The validation team will, most of the time, replicate those tests; in fact they must be able to replicate them, and if they can’t, that would be a problem.
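A small, hedged sketch of one such back-test follows: an out-of-time calibration check that compares the probability of default assigned to each rating grade with the default rate observed in a later period. The grade labels, assigned PDs, and counts are invented for illustration, and a real validation team would apply its own, likely stricter, test.

```python
# Minimal sketch of an out-of-time calibration back-test: compare the PD
# assigned to each rating grade with the default rate actually observed later.
# Grades, assigned PDs, and counts are illustrative assumptions.
from math import sqrt

grades = {
    # grade: (assigned PD, obligors observed out of time, defaults observed)
    "A": (0.002, 4_000, 10),
    "B": (0.010, 2_500, 30),
    "C": (0.050, 800, 55),
}
for grade, (pd_assigned, n, defaults) in grades.items():
    observed = defaults / n
    se = sqrt(pd_assigned * (1 - pd_assigned) / n)   # normal approx. to the binomial
    z = (observed - pd_assigned) / se
    flag = "review" if z > 1.645 else "ok"           # one-sided 95% threshold
    print(f"grade {grade}: assigned {pd_assigned:.2%}, observed {observed:.2%}, z = {z:+.2f} -> {flag}")
```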
Benchmarking: most of the time we’ll benchmark our internal models against external models to demonstrate their performance. We get into a lot of debates with our validation team on some of those models, because when we have a lot of data internally, even though I benchmark externally, I already have plenty of information to show that my model is performing. But it’s a debate, and they’re looking at that dimension also.
One dimension that is very important for the regulator is the identification of uncertainty and conservatism. Models by themselves are imperfect. There are a lot of things that bring bias: the assumptions we make, the data, gaps in our data, and the model risk that Trevor spoke about. We need to quantify those biases, or at least try to, and then put a buffer, a conservatism buffer, on top of our estimated parameters. That’s what the validation team will be looking at, and the regulators also.
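As a rough illustration of what such a buffer can look like, here is a minimal sketch that replaces a point estimate of the probability of default with a one-sided upper confidence bound, so that data scarcity pushes the parameter up rather than down. The confidence level and counts are assumptions for the example; actual regulatory margins are set differently in practice and cover more sources of uncertainty than sampling error alone.

```python
# Minimal sketch of a conservatism buffer on an estimated PD: use a one-sided
# upper confidence bound instead of the point estimate. The confidence level
# and portfolio counts are illustrative assumptions.
from math import sqrt

def pd_with_buffer(defaults, obligors, z=1.645):            # ~95% one-sided
    p_hat = defaults / obligors
    buffer = z * sqrt(p_hat * (1 - p_hat) / obligors)        # sampling uncertainty only
    return p_hat, p_hat + buffer

# The smaller the portfolio, the larger the relative buffer.
for defaults, n in [(30, 10_000), (3, 1_000)]:
    p_hat, p_cons = pd_with_buffer(defaults, n)
    print(f"n = {n}: best estimate {p_hat:.3%}, conservative PD {p_cons:.3%}")
```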
Regulatory compliance: there is a lot of it; we have about 250 regulatory paragraphs to meet, and we need to document them appropriately in our model documentation. Approval: once there are no errors in our documentation or in the models, the validation team will recommend the models to a senior management committee or subcommittee for approval. If there are errors, we need to correct them and resubmit to the validation team.
This committee will look at the model documentation, the report from the validation team, and the tests that the IT group did. They will also look at an impact analysis: what this model will do to capital, to the business, to loss provisions, and to many other aspects of the use of those models.
Once they approve (and some models with a significant impact on capital also need the board’s approval), we go into implementation. The implementation itself is just the IT group pushing the button, but before implementing we need well-structured change management, with the appropriate training, training manuals, and any changes to policies that need to be made at that time.
Then we get to the last stage, where use and monitoring come into play. We’ll typically be using the models for three to five years. Each year we’ll do an annual review of model performance: the model development group will test their models, back-test them, look at the stability of the population, and produce a report saying whether or not the models are performing. The validation team will review this report, replicate the tests, and issue their own recommendations.
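One common stability check in such an annual review is the population stability index, which compares the score distribution at development with the current portfolio. The sketch below is a minimal illustration; the bucket shares and the rule-of-thumb thresholds in the comment are assumptions, not the bank’s actual monitoring standard.

```python
# Minimal sketch of a population stability index (PSI) calculation comparing
# the score distribution at development with the current portfolio.
# Bucket shares and thresholds are illustrative assumptions.
from math import log

def psi(expected_shares, actual_shares, eps=1e-6):
    return sum(
        (a - e) * log((a + eps) / (e + eps))
        for e, a in zip(expected_shares, actual_shares)
    )

development = [0.10, 0.20, 0.30, 0.25, 0.15]   # share per score bucket at development
current     = [0.08, 0.17, 0.28, 0.28, 0.19]   # share per bucket this year

# A common rule of thumb: < 0.10 stable, 0.10-0.25 monitor, > 0.25 investigate.
print(f"PSI = {psi(development, current):.3f}")
```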
If the models are not performing well enough, we’ll go back and either recalibrate our parameters or redevelop the model, and the life cycle starts again. Then we have the loan review group: what they do is replicate the ratings. They go into loan-by-loan replications and test every element of the
models, and make sure there is quality in the answers coming from the business group. They produce their reports, and the audit and validation teams will use that information to strengthen their own reports. Annual self-assessment: I spoke about the 250 regulatory paragraphs that we need to meet; we need to do a self-assessment against those paragraphs each year. That concludes my presentation. Thank you.
(Applause)
Moderator Rioux: Yes, perfect. Any questions?
Lesley Thomson: I just would like to say something. You guys, you take all the fun out of this.
(Laughter)
We need our tools, we need our sandbox, we need something. You take access away from us, you take spreadsheets away from us. I mean, we need ways to test ideas, to do what-ifs, to play with stuff, to figure out what’s right. A model is not . . . You don’t want to turn it into a great big calculator where all a robot has to do is press a button, as long as all the boxes are ticked. A model is only as good as the person and the judgement that go into it. You can pretend that we’re doing stuff that’s accurate but, you know, even you said it: you know there’s something that’s not quite right. There are so many inputs and so many assumptions and so much judgement, and so we just add something to it at the end.
So it’s not that I’m . . . I don’t disagree that we need controls, but I think there has got to be some way of balancing that, of quitting fooling ourselves: we need to spend effort on the other side of the game too, and we need to enjoy what we’re doing. We need to have fun. It gets awfully bogged down when we have to do all this stuff in the guise of getting something accurate.
Moderator Rioux: Good point, Lesley.
(Applause)
One comment, though, Lesley: I think what was defined here is an optimal state. It doesn’t mean that you have to be there, and there might be reasons not to be in that state; but what you develop for the optimal state, you need to transpose back to the spreadsheets. That’s the point here: it’s not that there should be no spreadsheets. You have to have as much rigor with the spreadsheets as was developed in the controlled framework, for change management, for write access, and so on. That’s the main . . .
Lesley Thomson: It’s probably more . . . I think I agree with you if you’re talking about a production environment versus a testing or play environment. If we could have more separation of those, it might be easier; it would give us more flexibility or freedom to play around without worrying about whether it’s so accurate, but then, once you’re pushing the button for the quarter, you’ve got it in something that’s more tightly controlled. I think we end up getting audited for all the stuff that we think about, like providing evidence that we challenged stuff. Do you know how often we challenge stuff? We’re doing it over coffee in the morning as we’re walking down. We do that all the time, every day.
Speaker Rodericks: It’s not a deal-breaker.
Lesley Thomson: No, it is, though. It starts to get . . . Maybe what I’m saying is just that increasing the amount of time spent proving that I’m doing a good job is rough. It makes it hard to do a good job.
Moderator Rioux: Do we have other questions from the floor?
Peter ?: Hi, my name is Peter. I have a question that could be a little more specific and technical, and it has to do with a lack of data. A good example would be the probability of default in a stress event for mortgage loans, or operational risk for an institution. When there is a huge lack of data when modelling, what are some of the things you would look for to address that situation? Secondly, if you’re going out and getting external data, what are some of the things you would question about those data in order to apply them to your model? And then, at a very high level, what should the board be asking when they’re looking at modelling?
Speaker Fortier: I could speak a lot about what the board should ask, because most of the time they don’t have the expertise to ask the right questions and challenge the development team or the validation function. But when you don’t have a lot of data, or mostly a lack of default data, one thing we do is extend the period of data that we use (and sometimes when you extend, you get into lower-quality data, which is another issue). A lot of the time we also go outside to a data consortium, and there are more and more data consortia: Canadian bank consortia, and RMA, which is an association in the U.S. but has a database with Canadian data. It is becoming easier and easier to pick out data that are representative of your portfolios.
It will always be a challenge. But if the data are not well structured enough, or we don’t have plenty of them, the regulations also allow us to build what we call expert models, which are really based on expert judgement, and then we’ll use larger conservatism buffers at the end. If you have a big enough buffer, the regulators will . . .
Moderator Rioux: Anybody else want to add anything to this, or . . .
Speaker Howes: Your question is about data that drives and supports assumptions, really, as opposed to data which is critical and key. It’s beyond my expertise as a model vendor. It comes into actuarial standards of practice, in a sense, in terms of when you have the leeway to, and have to, use judgment to set an assumption: how do you come up with a best-estimate assumption, and what kind of margins do you put on it? That must be driven by the source of the data, its relevance, and its reliability, and there are standards of practice that talk about that in general terms. CLIFR would be a source for advising life actuaries about how they do those sorts of things.
If I put it into the terms of what I was talking about in the scope of my presentation, I think it’s really important to understand the source of the data with which you have driven a particular assumption that is key to your model. It’s important to understand the impact of that assumption, so that if you are wrong, because of the data you’ve used or a lack of adequate data, you have an understanding of what that risk is in terms of the results you’re producing, and you disclose those things to the key users. Those kinds of principles of understanding, stress testing, assessing risk, and disclosure apply whether you’re talking about data or about other elements of the model’s construction and use.
Moderator Rioux: We’re running a bit out of time, so if you have more questions, I invite you to come by and ask them. In wrapping up, I just want to outline some of the elements that were discussed here. The increased attention of the profession on models is undeniable. Models are much more than software, and I hear that it’s very challenging when you start talking about the assumption-testing process, the challenging of assumptions, and so on.
There is the segregation of that sandbox from the overall IT framework, and the overall one-model principle. Frameworks informing the review work are quite important; I invite you to look at the comprehensive COSO components, because COSO is a fairly broad framework that could inform your review of models. The best practices from the 2012 survey are also worth a look; that report is fairly solid. We talked about the fact that the process is really circular, whereby you go from development to use and monitoring. Other components, including validation, review, self-assessment, and back testing, are key to having good, solid models, along with the importance of understanding and quantifying the limitations and bias in the models. I invite you to come and ask more questions if you have any. Thank you very much.
(Applause)
[End of recording]