• Safe and secure

  • Quick and easy

  • Web-based solution

  • 24/7 Customer Service

Rate form

4.9 Satisfied

948 votes

The Steps of Customizing Witness Certificate Format on Mobile

Search for and design the perfect Witness Certificate Format in the CocoSign template library to automate your workflow, then choose it. If you are still wondering how to fill out Witness Certificate Format, you can check out the key elements below to get started.

Note the signing area

Draw your signature

Click "done" to send the form

  1. First, find the right form and open it.
  2. Next, read through the form and note the required details.
  3. Then, fill in the required info in the blank fields.
  4. Select the check box if you meet the conditions.
  5. Check the form once you have filled it out.
  6. Place your esignature at the bottom.
  7. Choose the "Done" button to save the document.
  8. Download the form as a Google Doc.
  9. Contact the support team for more info about anything you find unclear.

Choose CocoSign to simplify your workflow by filling in Witness Certificate Format and placing your esignature instantly with a well-drafted template.

Thousands of companies love CocoSign

Create this form in 5 minutes or less
Fill & Sign the Form

CocoSign's Tips About Customizing Witness Certificate Format

youtube video

Witness Certificate Format Inquiry Instruction

Hi everyone, I hope you can hear me. Thank you for joining the webinar today. My name is Nick Lamb and I'm the Managing Director for Morgan McKinley here in Hong Kong. We're delighted to welcome Bart Baesens to present today's Success Series webinar, which will be on state-of-the-art credit risk analytics. Today Bart will talk about credit risk analytics and, more specifically, about level 0, the data; level 1, the model; and level 2, ratings and calibration. Bart is a professor at KU Leuven in Belgium and a lecturer at the University of Southampton. He's done extensive research on analytics, customer relationship management, web analytics, fraud detection and credit risk management. His findings have been published in well-known international journals and presented at many international conferences. He's the author of the books Credit Risk Management: Basic Concepts, published by Oxford University Press, and Analytics in a Big Data World, published by Wiley. Before we start I'd like to run through a few housekeeping tips. We will be recording this webinar, which will be available in the next few days on our websites, and we will also send you the link by email. If you have any questions, they will be taken via the chat feature on your webinar control panel at the end of the presentation. Please also join the conversation on Twitter at Morgan McKinley using hashtag Success Series and hashtag Career Ally. We hope you enjoy the webinar today, and now I'm delighted to introduce Bart.

Hello — thanks Nick for the introduction, and welcome everybody. I would say good morning, good afternoon, hopefully not good night, from wherever you're joining. My name is Bart Baesens and today we're going to talk about the state of the art in credit risk analytics. I'll maybe briefly start by introducing myself. Nick already did it, but let me just highlight a few more things. So, I studied at the Catholic University of Leuven, which is in Belgium, and this is also the place where I'm situated right now. Leuven is a very small city in Belgium; it's close to Brussels, about 20
kilometers away from Brussels. It's known for having the best university of the country — which is pretty obvious, because that happens to be the place where I work — and it's also known for having lots of beers. Belgium is known for beers, and a lot of beers are being exported; one of the beers that is very popular outside Belgium is Stella Artois. This beer is being brewed a couple of hundred metres away from the place I am at right now. It is not the best Belgian beer, though — definitely not the best Belgian beer; Belgians don't like it, and that's probably one of the reasons why we export it that much. Unfortunately, this seminar is not about beers but about credit risk modeling. So, I did a PhD on the topic of credit scoring using machine learning techniques, which I finished in 2003. If you would like to have a copy of my PhD, do send me an email — I'm very happy to send it to you as a PDF; actually, I'll be pleased that finally someone shows some interest in my work. Currently I am a professor at KU Leuven in Belgium and a lecturer at the School of Management of the University of Southampton in the UK. My research is analytics and anything related to that: we study analytics primarily in a credit risk setting, but we also study it for fraud detection, for marketing analytics, etc. If you want to have an overview of all the research that we're doing in the area of analytics, feel free to connect on my YouTube, Facebook and Twitter accounts — the name is Data Mining Apps. On my YouTube account you will find some movies discussing some of our most recent research findings. Also feel free to have a look at our website, dataminingapps.com, where you will find all our research tracks, some further explanation, some key papers that we published on the topic, and also links to videos. Feel free to send me an email as well in case you would like some further follow-up information at the end of the presentation. So
here you can see the overview of today's presentation. We're going to start with a brief introduction situating the importance of credit risk modeling. Then we're going to decompose credit risk into its various components: we're going to talk about PD, LGD and EAD. That will bring us to a multi-level credit risk model architecture. Then we're going to discuss various challenges that come with working out that architecture: we're going to talk about data quality, about credit risk model requirements, and about the distinction between model discrimination and model calibration — which is a huge challenge in credit risk modeling, if you ask me. We're going to talk about model monitoring, model validation, back testing and benchmarking, and then we're going to finish with some conclusions. At the end of my presentation I will also give you a very brief demo of an e-learning course that I have been developing on the topic.

Let's start with the introduction. Well, credit risk models are very important these days. Why? Because they more and more steer the strategic decisions of banks. The outcomes of the credit risk models are directly used by banks and financial institutions to decide upon the buffer of equity capital, and we want to make sure that a bank is well capitalized, because if a bank is not well capitalized it's going to bring its savings depositors into problems. In other words, the minimum equity or buffer that a bank holds is directly determined by credit risk models — and not only credit risk models, but also market risk models (and when I say market risk I'm referring to liquidity risk, commodity price risk, interest rate risk, etc.), and not only market risk but also operational risk, fraud risk models and so on. All of these types of risk are being directly quantified by means of big data analytics, so we had better make sure that our data is of good quality and that our analytical models are of good quality as well.

Now, these analytical models are being more and
more subject to regulation. I'm sure you have heard about Basel II, Basel III and Solvency II. In essence, all these capital accords are analytical accords: they thoroughly discuss what the inputs of the analytical models are and how the outputs should be defined. They also discuss how the various outputs should be combined in order to determine capital and provisions. So if we make errors in our analytical models, the impact of those errors is now bigger than ever before. Model errors, in other words, are going to directly affect profitability, solvency, shareholder value and the macroeconomy as a whole. So again, analytics is having a huge impact in today's business environment.

One of these models is the credit risk model, and that's the focus of today's talk. When I say credit risk, you need to decompose credit risk into various components, and we have three credit risk components that play a crucial role. The first one is the probability of default, often abbreviated as PD: the probability of default of a counterparty over a one-year period, as it has been defined by regulation worldwide — and when I say regulation worldwide, I'm referring to the regulation that has been introduced by the Basel Committee on Banking Supervision of the Bank for International Settlements, situated in Basel in Switzerland. The probability of default is a probability — that's really very important to be aware of — hence it's a number between zero and one. It's not the same as a credit score: a credit score can be any number, a credit score can even be negative; in contrast, a probability always ranges between 0 and 1. So one of the key challenges here is how we're going to map credit scores to probabilities of default. A second important parameter is the LGD, which stands for loss given default; it is also measured as a decimal. It is defined as follows: the ratio of the loss on an exposure due to default of a counterparty to the amount outstanding. It's
also a parameter that should be estimated using analytics. The exposure at default, or EAD, is measured in currency terms; it represents the amount outstanding. Usually this is a parameter that does not need to be estimated, since you can just look at the amount outstanding of a mortgage loan or other installment loan. However, if you have off-balance-sheet credit exposures, like credit cards or credit lines, then the exposure at default is usually quantified by means of a credit conversion factor, or CCF. The credit conversion factor then represents how much of the unused credit is likely to be converted into credit upon default. These are the three key risk parameters, and they are then combined to quantify both the expected and the unexpected loss. The expected loss represents the long-term average loss; it can just be found by multiplying the PD, the LGD and the EAD. The unexpected loss is quantified according to a formula which has been written down in the Basel Accords; the formula essentially implements a Value-at-Risk model which takes the PD, the LGD and the EAD and combines them to get the unexpected loss. We're not going to zoom into that formula in today's webinar. The only thing you need to know about it at this very moment is that the formula is linear in terms of LGD and EAD, so any change in LGD or EAD gives you a similar change in terms of unexpected loss and capital. Mind that the expected loss should be covered by the provisions of the financial institution, whereas the unexpected loss should be covered by Basel regulatory capital, or equity as we have referred to it; you can see this visualized in the figure at the bottom right.

Let's zoom in to PD and LGD modeling now. Well, actually, the way the risk parameters — and here I'm not talking only about PD and LGD but also about EAD — the way those three risk parameters are modeled is according to a three-layer model architecture.
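The loss arithmetic just described — an EAD built from a credit conversion factor, expected loss as PD × LGD × EAD, and an unexpected-loss capital figure that is linear in LGD and EAD — can be sketched in a few lines. This is a minimal illustration, not the full Basel text: the asset correlation of 0.15 and every portfolio number below are assumptions chosen for the example; only the 99.9% confidence level follows the accord's Value-at-Risk convention.

```python
from statistics import NormalDist

N = NormalDist()  # standard normal distribution

def ead_off_balance(drawn, undrawn, ccf):
    """EAD for an off-balance-sheet exposure: the drawn amount plus the
    share of unused credit expected to be converted on default (the CCF)."""
    return drawn + ccf * undrawn

def expected_loss(pd_, lgd, ead):
    """Expected loss: the long-term average loss, PD * LGD * EAD."""
    return pd_ * lgd * ead

def unexpected_loss_capital(pd_, lgd, ead, rho=0.15):
    """Unexpected-loss capital in the spirit of the Basel IRB
    Value-at-Risk formula at 99.9% confidence. Note the result is
    linear in LGD and EAD, as mentioned in the talk."""
    wcdr = N.cdf((N.inv_cdf(pd_) + rho ** 0.5 * N.inv_cdf(0.999))
                 / (1 - rho) ** 0.5)  # worst-case default rate
    return (wcdr - pd_) * lgd * ead

ead = ead_off_balance(drawn=6_000, undrawn=4_000, ccf=0.5)  # 8000.0
el = expected_loss(0.02, 0.40, ead)                         # about 64
ul = unexpected_loss_capital(0.02, 0.40, ead)
# doubling the LGD exactly doubles the capital figure (linearity):
assert abs(unexpected_loss_capital(0.02, 0.80, ead) - 2 * ul) < 1e-9
```

Under these assumed numbers, the expected loss (about 64 currency units) would be covered by provisions, while `ul` would be covered by regulatory capital.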
Level 0 has the data, and when we say data we're talking about internally collected data — data that has been collected by the bank or financial institution itself. We're also talking about external data: data from credit bureaus, for example from companies such as Equifax, Experian and Dun & Bradstreet or, for those of you joining us from Australia, data gathered from Veda, etc. And then we have expert judgment. Expert judgment is qualitative data, based upon the expert's common sense and the expert's business experience. Expert judgment data is pretty important data to help us steer the analytical models in the right direction; it's also very important for validating the findings afterwards. So this is level 0: level 0 is the data and the data pre-processing — and when we say data pre-processing, we are referring to outlier detection, outlier treatment, missing values, categorization, weights-of-evidence coding, information filtering, etc. The data we pre-processed at level 0 is then fed to level 1. At level 1 we're going to create the model, and in a PD context this will be an application scorecard or a behavioral scorecard. An application scorecard is a scorecard that is used to score new credit applications, to decide whether you're going to grant the mortgage or grant the credit card, yes or no. A behavioral scorecard is a scorecard that is constructed for ongoing monitoring: once the customer has entered your portfolio, you will monitor their repayment behavior by means of a behavioral scorecard. Typically, both an application and a behavioral scorecard are constructed using logistic regression; logistic regression is one of the most widely used scorecard-building techniques in the industry. I've been teaching courses in Australia, Hong Kong, in many places in Asia and Europe and the United States, and everywhere you see logistic regression being used to build application scorecards and behavioral scorecards. Now, these scorecards give you scores.
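The weights-of-evidence coding mentioned under level 0 pre-processing can be sketched in a few lines. The age bands and good/bad counts below are hypothetical, purely to show the mechanics:

```python
import math

def weight_of_evidence(goods, bads, total_goods, total_bads):
    """WoE of one category: log of the category's share of goods
    over its share of bads."""
    return math.log((goods / total_goods) / (bads / total_bads))

# hypothetical categorized age variable: band -> (goods, bads)
bands = {"18-25": (500, 60), "26-40": (2000, 80), "40+": (1500, 30)}
total_goods = sum(g for g, _ in bands.values())  # 4000
total_bads = sum(b for _, b in bands.values())   # 170
woe = {band: weight_of_evidence(g, b, total_goods, total_bads)
       for band, (g, b) in bands.items()}
# relatively risky bands get a negative WoE, safe bands a positive one
assert woe["18-25"] < 0 < woe["40+"]
```

The WoE values would then replace the raw categories as inputs to, say, a logistic regression scorecard.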
A score is a number that allows you to rank-order your obligors in terms of their default risk. A score can be any number: the focus of scoring is to discriminate — to discriminate the risky obligors from the non-risky obligors using a continuous score — and, as I already mentioned, the score can even be negative. However, as we said before, in a regulatory capital environment imposed by Basel you do not only need a score, you need a probability: a probability of default. So the scores should be mapped to risk ratings, also referred to as default ratings, which should subsequently be accompanied by probabilities of default. This is what happens at level 2: at level 2 we're going to define the ratings and calibrate the risk measures.

This multilevel architectural framework is a framework that you typically see adopted for PD modeling, but also for LGD and EAD modeling; it actually facilitates the whole modeling exercise. In fact, we do not only use it for modeling, we also use it for validation. When I talk about back testing later on — in practice it means model monitoring, seeing whether our models perform well, yes or no — the back testing can be situated at level 0, where you're going to back-test the data stability; it can be situated at level 1, where you're going to back-test the model discrimination; and it can be situated at level 2, where you're going to back-test the calibration of the PD measures. So this is an architecture which we can use not only for modeling but also for back testing, for benchmarking, and in fact also for stress testing: you can stress-test the data at level 0, you can stress-test the model at level 1, and you can stress-test the ratings and the calibration at level 2.

Here you can see some PD performance benchmarks that we obtained in our research. As I already said, when we do PD modeling you can make a distinction between application and behavioral credit scoring. Application credit scoring, remember, focuses on deciding upon the creditworthiness of new credit applications.
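The level-2 step of mapping an unbounded score onto a probability of default can be sketched with a logistic calibration curve. The intercept and slope below are made-up values; in practice they would be fitted on historical default data:

```python
import math

def score_to_pd(score, a=2.0, b=0.015):
    """Map an arbitrary credit score (any number, possibly negative)
    to a probability of default in (0, 1) via a logistic calibration
    curve PD = 1 / (1 + exp(a + b * score)); a and b are hypothetical
    parameters that would normally be fitted on historical defaults."""
    return 1.0 / (1.0 + math.exp(a + b * score))

for s in (-100, 0, 250, 600):
    pd_ = score_to_pd(s)
    assert 0.0 < pd_ < 1.0  # a PD, unlike a score, lives in (0, 1)

# monotone: a better (higher) score maps to a lower PD
assert score_to_pd(600) < score_to_pd(0) < score_to_pd(-100)
```

Whatever the raw score range, the output is a well-defined probability, which is what the Basel layer requires on top of pure discrimination.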
Behavioral credit scoring focuses on monitoring the credit risk of your current portfolio. You can also see LGD included. Now, some of you may think: do these studies refer to corporate or retail portfolios? Actually, it doesn't matter — we've used both corporate and retail portfolios in our benchmarking studies. In application scoring we found that the average application scorecard has about ten to fifteen characteristics; when I say characteristics, I'm referring to predictors such as age, income, number of years as a client, number of years with current employer, and so on. The performance of those application scorecards typically ranges between 70 to 85 percent — when I say performance, I'm referring to the area under the ROC curve, a key performance metric which banks have to report to their local regulators. For behavioral credit scoring we have a similar number of characteristics, I would say between 10 to 15; the area-under-the-ROC performance metric is somewhat higher because we have a lot more data available — in fact, the area under the ROC curve will range between 80 to 90 percent. Now, the story is somewhat different for LGD. The average number of characteristics that we witness in an LGD model is somewhere around 6 to 8; here I'm talking about characteristics such as the loan-to-value ratio (LTV), the degree of collateralization, and also a measure of default risk — most of the time in a loss model, an LGD model, you will see a measure of default risk popping up. The performance is usually quantified by means of R-squared; R-squared is a metric that allows you to measure the performance in case you have a continuous target, as opposed to PD. For LGD the results are a little bit more disappointing, because the R-squared usually varies between 20 to 30%. This is not a lot, and in fact it's worrying. Why is it worrying? It's worrying because, remember, I told you that LGD has a linear impact on capital, so it has a bigger influence on capital than PD.
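The area under the ROC curve quoted in these benchmarks has a handy interpretation: the probability that a randomly chosen good outranks a randomly chosen bad. A small sketch on made-up scorecard output:

```python
def area_under_roc(scores, labels):
    """AUC = P(score of a random good > score of a random bad),
    counting ties as a half. The O(goods * bads) pairwise form is
    fine for a sketch. labels: 0 = good (non-default), 1 = bad."""
    goods = [s for s, y in zip(scores, labels) if y == 0]
    bads = [s for s, y in zip(scores, labels) if y == 1]
    wins = sum((g > b) + 0.5 * (g == b) for g in goods for b in bads)
    return wins / (len(goods) * len(bads))

# hypothetical application scores; 1 marks an observed default
scores = [620, 710, 540, 800, 640, 660, 505, 745]
labels = [0, 0, 1, 0, 1, 0, 1, 0]
auc = area_under_roc(scores, labels)  # 14 of 15 good/bad pairs ranked correctly
```

An AUC of 0.5 would mean random rank-ordering; the 70–85% application-scoring range cited in the talk sits well above that.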
And the frustrating thing here is that we are very good at PD prediction — we are very good at predicting default — but we're not very good at loss forecasting, at LGD prediction. So that's worrying. So how can we improve the performance of our analytical PD and our analytical LGD models? Well, there are two strategies that we can adopt: we can work on the model, or we can work on the data. Let's first follow the strategy of working on the model. Well, if we work on the model we can make use of neural networks, support vector machines or random forests. These are very complex nonlinear models, very hard to understand. They are very powerful, because those models typically have a universal approximation property: they can approximate any function to any desired degree of accuracy over a compact interval. That makes them very attractive. However, there's a loss of interpretability: it is very hard to see how the inputs contribute to the output — pretty hard to see how the inputs contribute to the PD, LGD and EAD, and so on. So most regulators — think about APRA, the Australian Prudential Regulation Authority; the Monetary Authority of Singapore; think about the Hong Kong Monetary Authority — most of these regulators will be reluctant to approve those techniques. Have I seen neural networks and support vector machines being used in credit risk modeling? I've seen them being used occasionally in credit risk modeling, but always in a white-box setup. I cannot discuss in this course how you can do that, but I discuss it in some of my other courses in case you would be interested. There are ways of opening up the neural network and support vector machine black box, but they are not that straightforward, so actually most of the time they're not being used for credit risk modeling. So a better strategy could be not to focus on the model but to focus on the key ingredient of the model — and the key ingredient of the model is the data. We should invest in data, and more specifically in external data. Think about a
FICO score, or a bureau score from Veda, for example, in Australia. A bureau score is a score that reflects the creditworthiness of your customers, developed by a credit bureau such as Equifax, Experian, Veda, etc. In the United States there's a well-known bureau score referred to as the FICO score, which has been extremely popular in credit risk modeling. Also, data quality is important: if we focus on data — on external data and data quality — then the model is still
here we are referring to5unusual observations extreme values or5outliers and when dealing with outliers5you have to be very careful because5defining an outlier is not that easy age5is 300 years is definitely an outlier5that cannot be true something's wrong5there income is 1 million euros is maybe5an extreme observation but not5necessarily invalid if you have a rich5person then income 1 million euros could5be a violent observation not only data5accuracy but data completeness what5about missing values missing information5can be relevant information I did not5give you my income because I'm5unemployed5it means income is missing but being5unemployed is risky it's important to5notice if you're doing PD or Ltd bond5data definition how are we going to5define the data LGD in my credit this5modeling course we spent one day5discussing LGD modeling the time horizon5how to include indirect costs what about5direct costs what discount factor to use5etc and now we also have a bunch of5other dimensions such as data buys data5recency data latency and so on I have a5PhD students who did some work on a5topic and you can see the various data5quality dimensions that we have5considered in a research I'm not going5to go through them one by one I'm just5going to highlight some categories so5you can see to the left we have some5categories relating to intrinsic data5quality contextual data quality5representation access and so on5obviously some of these data quality5dimensions are overlapping but it is5important to make sure that data is5accurate objective5of good reputation complete etc and we5did a survey with some of the banks5worldwide more than 50 banks and5actually also some Asian banks were5included and the focus was on credit5risk in Linux and the main findings of5our survey work that most banks5indicated that between 10 to 20 percent5of the data suffer from data quality5problems this is quite a lot if you know5that this data is being used for PD and5LGD Mahon5one of the key problems 
One of the key problems was manual data entry; the diversity of data sources and a consistent corporate-wide data representation were also main challenges. If we ask banks why they invest in data quality, the answer we got was: because of regulatory compliance — not so much because of competitive advantage, but because of regulatory compliance, because the Basel Accords require banks to invest in data quality. Data quality is a very tricky problem, because in the short term there's not much that you can do about it. There are only two ways that you can cope with data quality in the short term. The first way is a statistical way: you're going to diagnose data quality and treat it — you're going to diagnose the outliers and treat the outliers, and there exist various schemes to appropriately deal with outliers, such as capping procedures or other imputation procedures. If it's really bad, in the short term you would have to rely on external data, data that has been gathered by data poolers. In the long term it is important that you completely redesign your data entry processes, make sure that you have appropriate validation constraints and referential integrity constraints, and make sure that you set up programs for master data management.

What, then, are the requirements of a good PD, LGD and EAD model? Well, first of all statistical performance: the model should have good discrimination, and you can measure that in various ways — in my credit risk modeling course we spend something like two hours just talking about how to accurately quantify the performance of a PD, LGD and EAD model. But models should not only have good performance, they should also be interpretable and justifiable. All the PD, LGD and EAD models that you will be developing will be validated by your local supervisors, and some supervisors take their job very seriously — I've heard of supervisors in certain countries that actually completely redevelop the model to see whether you did a good job, yes
or no. Interpretability and justifiability are a key requirement here. Then there is also operational efficiency: how much effort is needed to evaluate, monitor and retrain the models? Economical cost is also a key concern: what is the accompanying cost of gathering the inputs, maybe buying them elsewhere, pre-processing them and running them through the model? And finally, regulatory compliance is also important: the model should be aligned with regulation as it has been imposed by Basel or Solvency.

I already told you the difference between model discrimination and model calibration. Model discrimination is what we are concerned with at level 1 of our credit risk model architecture: here we want to do credit scoring, application or behavioral scoring — we want to assign high scores to the good customers and low scores to the bad customers. However, scoring is no longer sufficient, because Basel imposes an extra layer on top, which is the calibration: we should have ways of mapping the scores to well-calibrated probabilities of the event taking place — in our case, the event is default. There exist various ways of mapping scores to well-calibrated probabilities, and in doing so it is of crucial importance that we also bring the macroeconomy into the model, because obviously the macroeconomy — as quantified by GDP, inflation, unemployment rates — will have an impact on our model. In the illustration, to the left you see a traditional scorecard: age, gender and salary. This is an application scorecard — and note that in certain countries you cannot include age and gender in your scorecard; there is national regulation there concerning privacy that dictates what characteristics you can include, yes or no. So to the left we have an application scorecard; to the right we have a rating. A rating is a grouping of people, a grouping of obligors, which are similar in terms of default risk, and for that rating we then have the historical default rate depicted. You can see that the historical
default rate varies between two to four percent. You can see the historical default rate; then, if we do calibration, we have to decide how we're going to extrapolate into the future: are we going to go for a downturn scenario and follow the red dots, for an upturn scenario and follow the green dots, or for a scenario in between? So the question here is how we can move from the left, from these scores, to ratings and corresponding default rates and PDs. Well, a first way to do that is by clustering our scorecard outputs into pools. Those pools in Europe are often referred to as ratings; in the United States they are referred to as segments; many Asian countries use the term pools. A pool, or rating, is a homogeneous grouping of scores in terms of default risk. We do that because our scores are too fine-granular — there is too much detail — and the pools can be defined in various ways; it's essentially what we refer to as a semi-supervised learning exercise. We can define them by mapping onto an agency rating scale — Moody's, Standard & Poor's, Fitch — or by using decision trees; decision trees are a very handy tool to map scores into pools. Once we have mapped our scores into pools, we need to calibrate an event probability — a PD model, or an LGD or EAD model. We're going to do that using time series analysis techniques, dynamic models, Markov chains, simulations, and so on. That will allow us to actually create the pools, and then we can get an idea about the volatility of the pools, about how volatile the ratings are. Rating volatility is directly related to the rating philosophy, and you have two broad types of rating philosophies: point-in-time (PIT) or through-the-cycle (TTC). A point-in-time rating philosophy means that your ratings measure credit risk at a particular point in time, so your ratings are very volatile; a through-the-cycle rating philosophy means that your ratings are going to measure default risk throughout a credit cycle and are more stable in time.
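The pooling step — collapsing fine-granular scores into a handful of homogeneous ratings — can be sketched with simple quantile cut-offs, a stand-in for the decision-tree mapping mentioned in the talk. All scores below are made up:

```python
def make_pools(calibration_scores, n_pools):
    """Build a score -> pool mapping from quantile cut-offs, a simple
    stand-in for the decision-tree pooling used in practice.
    Pool 0 holds the lowest (riskiest) scores."""
    s = sorted(calibration_scores)
    cuts = [s[len(s) * i // n_pools] for i in range(1, n_pools)]
    return lambda score: sum(score >= c for c in cuts)

pool_of = make_pools([505, 540, 585, 620, 640, 660, 710, 745, 800],
                     n_pools=3)
assert pool_of(505) == 0 and pool_of(650) == 1 and pool_of(800) == 2
```

Each pool would then get a calibrated PD — for example, its long-run observed default rate, possibly adjusted for the chosen rating philosophy.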
Both can be quantified by looking at the pools and how stable they are. Here you can see our model architecture again — remember: level 0, the data; level 1, the scorecard or discrimination; level 2 brings the macroeconomy into the game and is going to allow us to see how the macroeconomy influences our PD, LGD and EAD models. A side benefit of this multi-level model architecture is that it becomes very easy to do stress testing. Stress testing can be done using sensitivity analysis, whereby we're going to see what happens to our LGDs if the loan-to-value ratios increase by 10% — this would be an example of a sensitivity stress test at level 0, the data level. We can also do a sensitivity stress test at level 1, where we assume that our application or behavioral scores drop by 5%, or a sensitivity stress test at level 2, where we assume that all PDs increase by 10%. We can also do scenario analysis, hypothetical or historical, where we could mimic a one-out-of-25-years event; there seems to be more and more consensus in the industry nowadays that a stress scenario corresponds to a one-out-of-25-years event. Obviously there are many challenges that come with stress testing — I'm sure I don't have to convince you of that: lack of historical data, correlations breaking down during periods of stress, the need to integrate the risk across the various credit risk portfolios, etc. Our models will never be perfect, right? You can see a statement from George Box, well known for the box plot, saying that all models are wrong but some are useful.
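A level-2 sensitivity stress test of the kind just described — shock all PDs by +10% and re-measure loss — can be sketched on a toy two-exposure portfolio. All numbers are assumed for illustration:

```python
def expected_loss(pd_, lgd, ead):
    """Expected loss of one exposure: PD * LGD * EAD."""
    return pd_ * lgd * ead

# toy portfolio: (PD, LGD, EAD) per exposure
portfolio = [(0.01, 0.25, 100_000),
             (0.03, 0.45, 50_000)]

base = sum(expected_loss(p, l, e) for p, l, e in portfolio)

# level-2 sensitivity stress: every PD up by 10%
stressed = sum(expected_loss(p * 1.10, l, e) for p, l, e in portfolio)

# expected loss is linear in PD, so the total moves by exactly 10%
assert abs(stressed - 1.10 * base) < 1e-6
```

The same pattern applies at level 0 (shock an input such as LTV and re-run the LGD model) or level 1 (shock the scores themselves).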
that the LGD is Sam percent then8we're going to put it to 15% or maybe8even 20% to deal with the fact of8imperfect data quality and mod risk the8question is how much margin you should8add to your PD Ltd and EAD measurements8in order to compensate for imperfect8data and model risk and also here there8are some approaches that can be followed8model monitoring is also very important8why do or monitor the gradient8performance in order to set up8monitoring frameworks you have to8carefully diagnose the various reasons8why models may degrade in performance it8could be due to sample effects because8the mods were based upon limited samples8it could be due to micro economic8effects or even internal effects so8we're models need to be constantly8monitored where we say monitoring there8are various types of monitoring there's8quantitative monitoring that refers to8back testing and benchmarking there's8qualitative monitoring that refers to8data quality model design documentation8corporate governance and management8oversight back testing is a very8important model monitoring activity so8here you can see it's a very simplified8example we got four ratings and you can8see the estimated PD that comes out of8her model the number of observations8that we have witnessed in each of the8weighting categories or pools and the8number of defaults so for rating a BAC8this ting would need that we have to8contrast the default rate of 1.7% with8the PD of two percent various test8statistics can be used for this purpose8but when you have a test statistic you8have to adopt the confidence level and8some regulators like the Hong Kong8Monetary Authority have given some very8specific input Hong Kong Monetary8Authority for example advocates8confidence levels of 95 or 99.9 percent8here you can see some various back8testing examples these are all based8upon traffic light indicator approaches8indicating here whether you have data8stability right this is a system8stability and then next here whether you8have 
model stability based upon the area8under the ROC curve or the accuracy8ratio which is closely related to that8which can then be monitored in time back8testing allows you to come up with a8diagnosis but it's not going to allow8you to specify what to do in response to8what act in response to what finding so8the whole model monitoring or back8testing exercise should be accompanied8by action plans8telling you what to do in case your8model is no robot more appropriately8calibrated or in case your model is no8longer providing adequate performance at8level 1 okay8good key lessons learn from this very8short webinar you see there's a lot to8mention here and I have to briefly go8through the topics lessons I want to I8want you to learn or that the best way8to improve the performance of a PD LT8GED model is not by looking for complex8models but by improving data quality8first a good model does more than giving8good statistical performance we also had8operational efficiency interpretability8economical cost and regulatory8compliance discrimination and8calibration is also very important bring8the macroeconomy into the model we also8introduced the idea of model risk model8validation and action plans in case you8want some further information here you8can see some references you can find9them all of my data mining apps on9website here you can see two books that9I've written on the topic and to9conclude I also would like to give you a9very brief like a three to four minute9overview of a course that I've been9developing on the topic we can then have9a discussion afterwards to address some9of your questions9so about six months ago we started to9develop a course called credit with9modeling which is available as elearning9worldwide so each you can take it9whenever you want at your own pace it's9about 22 hours of videos dev all scripts9and quizzes whereby we focus on the9modeling concepts and methodology it's9not a software course but it's offered9by SAS because I often partner with 
SAS9to keep to teach courses9you only need a laptop iPad with a web9browser to run the course and you get9one limited one-year unlimited access to9all course material I'm just going to9show you an example so here you can see9the screen of the course credit risk9modeling using SAS and then you can9start it and to the left here you can9see all the lessons every lesson9consists of a couple of videos you can9see for lesson two this is the Basel9backwards here we have videos of four9minutes two minutes discussing all the9backwards right let me just click on one9of them this is audio you can play it9whenever you want right during one year9okay so here we have lessened so on it9credit scoring the Basel backwards level9zero remember preparing the data for9credit scoring classification for credit9scoring how to measure the performance9using state-of-the-art techniques like9ROC curves cockers lifts curves how to9compute the KS distance we also have9some lessons on defining default ratings9this is level 2 using the9semi-supervised learning exercise I9referred to earlier there's also some9science demos every now and then but9science is not the focus of the course9it's really on the methods and the9concepts you can see how we define9wailings the methods wheel will be9discussed there's also questions in9between to actually check whether you9are understood yes or no at the end of9every lesson there's a quiz that you can9take right which is just gonna ask you a9couple questions here we cover PDL to9the AP modeling also validation working9with low default portfolios and stress9testing and you also get a certificate9at the end there's a course certificate9if you finish the course in case you9will have some further questions about9this course you can always send me an9email right my email address was9indicated at the beginning we're on9while we already got sold in quite a9couple of times worldwide so you can let9me know in case you would have questions9case you would be interested 
always9happy to address those you can send me9an email right here this is my email9address okay so this finishes my short9weapon or I hope you enjoyed it I hope9you learn a couple of new things feel9free to mail me and I'm about ready to9take some of your questions thank you9both there's a couple of questions here9and number one in your presentation you9said that neural networks cannot be used9for credit risk modeling since they are9blackbox models have you seen other9application areas where they are being9used yeah for PD l-g-and-e D modeling9neural networks are commonly not used9the reason is because as you correctly9mentioned there blackbox techniques9inquired with models need to be white9box because they're subject to9supervised remedial I have seen your9networks quite recently being used in9fraud analytics for example a Faroese9Act which is one of the big companies in9the United States provides Krim scoring9solutions but they also provide fraud9scoring solutions to monitor credit card9transactions and see whether they are9fraudulent10yes or no now in fraud detection you10typically don't care whether you have a10black box or white box model because10you're so much focused on detecting10fraud for Isaak in the u.s. 
they have a10system called Hamilton and thought can10is a system based upon neural networks10to detect credit card fraud besides10being used for credit card fraud I've10also seen being neural networks being10used in marketing settings for example10full response modeling to see whether10somebody is likely to respond to a10marketing campaign yes or no so they are10being used in other settings but not10that much in credit responding okay10thank you thank you both and the second10question can you briefly summarize the10main approaches to develop PD and LGD10ratings all right there that's a good10one I didn't have time to get into it in10a webinar but I'm going I'm going to10quickly give you a high level overview10basically you know if you want to10develop PD10LGD or EAD ratings there are two10approaches that you can follow the first10approach is a mum is mapping onto an10external sway agency if you look at the10rating agencies like Moody's Standard &10Poor's and Fitch they came up with10rating scales themselves right with PD10rating scales as well as with LGD rating10scales so what you could do is you could10use the10grading scales map them against your10internal application behavioral scores10and define your ratings in such a way10that they as closely as possibly mimic10those external ratings in terms of10default rates so mapping into an10external rating agency scale is a first10approach of doing this the second10approach would be by using statistical10methods and a method that we have seen10that is really very good in that respect10is just a classification tree and a10classification tree when only two10characteristics the input of the10classification tree is what we call the10z-score that's the output of the10logistic regression that's your credit10score and the output is a binary label10specifying default yes or no you use a10classification tree with the score as10input and there are one available as10output quantification trees are often10used for that purpose one 
very important10thing that you should be aware of here10is that your tree is monotonic with10monotonic I referred to the fact that as10the rating goes down the default rate10should increase every now again10then during because of statistical10fluctuations you can have non monotonic10effects occurring so if you would be10using decision trees it's very important10to enforce that monotonic behavior hope10that answers the question thank you both10and and I think we probably got time for10just one more question so are you aware10of any statistical tests for back10testing they take into account default10correlation yeah it's another another10good question so oh yeah oh well let me10briefly go through the tests that are10being used for back testing a PD model10calibration there's a classical binomial10test my normal test is gonna test10waiting let me just go see what I can10find my slide here and just quickly10slides okay here this is the slide so10the binomial test is gonna test every10rating by rating10Julie gonna compare the estimated PDA to10the realized default way using a10binomial distribution it's often used10and it has also I know you're based in10in Hong Kong at least I I think in Hong10Kong the Hong Kong Monetary Monetary10Authority gave some positive reviews on10using the binomial tax they even um they10even included some confidence levels10more specifically they adopt the10confidence levels of 95 to 99.9 percent10so binomial test is one test but it does10not take into account default rate10correlation another test is a Hosmer10lean show test which is a high-score10test which is going to test all of them11simultaneously but again it does not11take into account default correlation11there's only one test which to the best11of my knowledge takes into account11default correlation and that's the VAT11eject past the value check test is a11test which actually builds further upon11the basel model i told you that the11basel model is a value at risk model so11it looks 
at the 99.9 percent worst-case11default rate using an asset correlation11factor which has been fixed in the11awkward well the same model that vagi11check model can also be used for back11testing and here you can use the same11asset correlations or default11correlations both are related for back11testing and those default correlations11to give you an example for mortgages11they're being set to fifteen percent and11for qualifying revolving exposures like11credit cards credit lines that are being11set to four percent so yes there exists11some tests that do take into account11default correlation one of them is the11fazzy check test great well thank you11very much but I hope you will enjoyed11today's webinar and for the credit risk11community it's been a pleasure to view11our upcoming events jobs career advice11and insightful blogs please do visit WWE11McKinley calm or follow us on LinkedIn11or Twitter11at more than cleanly using hashtag11success series and hashtag career Ally11and we look forward to hearing your11feedback and I hope you all have a11pleasant day morning evening thank you11very much
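The binomial backtest discussed in the webinar (contrasting rating A's realized default rate of 1.7% with its estimated PD of 2% at a 95% or 99.9% confidence level) can be sketched in a few lines of Python. This is an illustrative sketch, not the webinar's own code; the obligor and default counts (1,000 obligors, 17 defaults, i.e. a 1.7% default rate) are assumed for the example.

```python
from math import comb

def binomial_backtest(pd_est, n_obs, n_defaults, confidence=0.95):
    """One-sided binomial test of PD calibration for a single rating grade.

    H0: the true default probability is <= pd_est.
    Returns (p_value, reject): reject is True when the observed number of
    defaults is too high to be consistent with pd_est at the given
    confidence level (e.g. 0.95 or 0.999, as advocated by the HKMA).
    """
    # p-value = P(X >= n_defaults) under X ~ Binomial(n_obs, pd_est)
    p_value = sum(
        comb(n_obs, k) * pd_est**k * (1 - pd_est)**(n_obs - k)
        for k in range(n_defaults, n_obs + 1)
    )
    return p_value, p_value < 1 - confidence

# Rating A: estimated PD of 2%, assumed 1000 obligors with 17 defaults (1.7%)
p, reject = binomial_backtest(0.02, 1000, 17, confidence=0.95)
# p is well above 0.05 here, so the 1.7% default rate is consistent
# with the 2% PD at the 95% level and the calibration is not rejected.
```

Note that the test only flags a miscalibrated grade; as stressed in the webinar, the action plan describing what to do with a red or amber grade has to be defined separately.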

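The Vasicek one-factor model behind the Basel value-at-risk formula, which the Q&A names as the basis of the default-correlation-aware backtest, gives the 99.9% worst-case default rate directly. A minimal sketch, using the Accord's fixed asset correlations quoted in the answer (15% for mortgages, 4% for qualifying revolving exposures); the 2% PD input is an assumed illustration:

```python
from math import sqrt
from statistics import NormalDist

N = NormalDist()  # standard normal distribution

def worst_case_default_rate(pd_est, rho, q=0.999):
    """Worst-case default rate in the Vasicek one-factor model.

    pd_est: long-run average PD of the pool
    rho:    asset correlation (fixed in the Basel Accord per exposure class)
    q:      confidence level (Basel uses 99.9%)
    """
    return N.cdf((N.inv_cdf(pd_est) + sqrt(rho) * N.inv_cdf(q)) / sqrt(1 - rho))

# Illustrative 2% PD pool; a realized default rate above this bound
# would be a backtesting red flag at the 99.9% level.
wcdr_mortgage = worst_case_default_rate(0.02, 0.15)   # rho = 15% for mortgages
wcdr_revolving = worst_case_default_rate(0.02, 0.04)  # rho = 4% for revolving
```

Note how the higher mortgage correlation turns the same 2% PD into a much larger stressed default rate (roughly 18% versus roughly 7% for revolving exposures here), which is exactly why the correlation must be taken into account when backtesting.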
How to generate an electronic signature for the Witness Certificate Format online

CocoSign is a browser-based app and can be used on any device with an internet connection. CocoSign provides its customers with a highly productive way to e-sign their Witness Certificate Format.

It offers an all-in-one package combining legality, cost efficiency and flexibility. Follow these steps to place a signature on a form online:

  1. Make sure you have a high-quality internet connection.
  2. Upload the document which needs to be electronically signed.
  3. Choose the "My Signature" option.
  4. After choosing "My Signature", you will be given a selection of signature styles; choose your written signature.
  5. Generate your e-signature and choose "Ok".
  6. Choose "Done".

You have successfully signed the document online. You can access your form and send it. Aside from e-signing, CocoSign offers features such as adding fields, inviting others to sign, and combining documents.

How to create an electronic signature for the Witness Certificate Format in Chrome

Google Chrome is one of the most widely used browsers around the world, thanks to its wealth of tools and extensions. To meet its users' needs, CocoSign is available as a Chrome extension, which can be downloaded from the Chrome Web Store.

Follow these simple steps to create an e-signature for your form in Google Chrome:

  1. Open the Chrome Web Store and search for CocoSign.
  2. In the search results, choose 'Add'.
  3. Now, sign in to your registered Google account.
  4. Open the link of the document and choose the 'Open in e-sign' option.
  5. Choose the 'My Signature' option.
  6. Generate your signature and place it in the document where you want it.

After placing your e-signature, send your document or share it with your team members. What's more, CocoSign gives its users the option to merge PDFs and add more than one signee.

How to create an electronic signature for the Witness Certificate Format in Gmail?

In today's era, businesses have remodeled their workflows and gone paperless. This includes signing documents via email. You can easily e-sign the Witness Certificate Format without logging out of your Gmail account.

Follow the steps below:

  1. Get the CocoSign extension from Google Chrome Web store.
  2. Open the document that needs to be e-signed.
  3. Choose the "Sign” option and write your signature.
  4. Choose 'Done', and your signed document will be attached to a draft email produced by CocoSign's e-signature app.

The CocoSign extension takes care of all this for you. Try it today!

How to create an e-signature for the Witness Certificate Format straight from your smartphone?

Smartphones have substantially replaced PCs and laptops over the past 10 years. To keep pace, CocoSign lets you sign documents from your personal cell phone.

A high-quality internet connection is all you need on your cell phone, and you can e-sign your Witness Certificate Format with a tap of your finger. Follow the steps below:

  1. Visit the CocoSign website and create an account.
  2. Next, choose and upload the document that you need to have e-signed.
  3. Choose the "My Signature" option.
  4. Draw and apply your signature to the document.
  5. Check the document and tap 'Done'.

It takes only a moment to place an e-signature on the Witness Certificate Format from your cell phone. Print or share your form as you like.

How to create an e-signature for the Witness Certificate Format on iOS?

iOS users will be pleased to know that CocoSign provides an iOS app to assist them. If you need to e-sign the Witness Certificate Format on an iOS device, the CocoSign app has you covered.

Here's how to place an electronic signature on the Witness Certificate Format on iOS:

  1. Download the app from the App Store.
  2. Register for an account with your email address or via your Facebook or Google account.
  3. Upload the document that needs to be signed.
  4. Choose the space where you want to sign and select the 'Insert Signature' option.
  5. Draw your signature as you prefer and place it in the document.
  6. You can then send the document or upload it to the cloud.

How to create an electronic signature for the Witness Certificate Format on Android?

The great popularity of Android phones has given rise to the development of CocoSign for Android. You can install the app on your Android phone from the Google Play Store.

You can place an e-signature on the Witness Certificate Format on Android by following these steps:

  1. Log in to your CocoSign account via your email address, Facebook or Google account.
  2. Upload the PDF file that needs to be signed electronically by tapping the "+" icon.
  3. Tap the space where you need to place your signature and draw it in a pop-up window.
  4. Finalize and adjust it by choosing the '✓' symbol.
  5. Save the changes.
  6. Print and share your document, as desired.

Get CocoSign today to assist your business operations and save yourself a great deal of time and energy by signing your Witness Certificate Format on your Android phone.

Easier, Quicker, Safer eSignature Solution for SMBs and Professionals

No credit card required. 14 days free.