admin

About

Username: admin
Visits: 320
Roles: Administrator

Comments

  • Good question - I think it depends on how you define "equitable", which is something the CAS isn't overly clear on at the moment. The efficiency test measures how much variation is reduced when you go from manual loss ratios to standard loss ratios.…
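
A minimal numeric sketch of the idea in the comment above, assuming the test is expressed as a ratio of variances (the exact statistic and all figures below are illustrative, not from the source):

```python
import numpy as np

# Hypothetical loss ratios for a group of risks (illustrative numbers only).
manual_lr   = np.array([0.55, 0.80, 1.10, 0.65, 0.95])   # losses / manual premium
standard_lr = np.array([0.70, 0.78, 0.92, 0.72, 0.88])   # losses / standard (modified) premium

# Compare the spread of the loss ratios before and after experience modification.
# A lower ratio means the rating plan removed more of the variation.
efficiency = np.var(standard_lr, ddof=1) / np.var(manual_lr, ddof=1)
print(f"variance ratio (standard / manual): {efficiency:.3f}")
```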

  • The Fisher text doesn't give a lot of insight into what the loss-based assessment would cover. It's likely it would be used to address mispricing issues, such as cases where the loss frequency was higher than expected.

  • The key thing here is net premium = premium - outstanding deductible reimbursements. We need to subtract off any deductible amounts we have not recovered to get the premium for the losses the insurer has covered.
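
A quick arithmetic sketch of that relationship (the figures are made up, not from any exam problem):

```python
premium = 1_000_000                     # premium charged
outstanding_deductible_reimb = 150_000  # deductible amounts billed but not yet recovered

# Net premium reflects only the losses the insurer has actually covered,
# so unrecovered deductible reimbursements are subtracted off.
net_premium = premium - outstanding_deductible_reimb
print(net_premium)  # 850000
```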

  • I think the CAS examiner's report is sloppy here. I agree with you; to guarantee full credit in the future, you should check whether the SSE is lowest for the raw data or justify why you don't need to.

    If I had to guess, I'd say the CAS thou…

    in 2012Q5
  • When it comes to making predictions in the Couret & Venter reading we have three options.

    1.) Use the overall hazard group relativity.

    2.) Use the raw quintile relativity from the training data set.

    3.) Apply the multi-dimen…

    in 2011.Q2
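
A rough sketch of how option 3 differs from options 1 and 2, shown here as a simple one-dimensional credibility blend; the actual multi-dimensional procedure in Couret & Venter uses covariances across injury types, so Z below is just a stand-in and all numbers are illustrative:

```python
hazard_group_relativity = 1.00   # option 1: overall hazard group relativity
raw_quintile_relativity = 1.35   # option 2: raw relativity from the training quintile
Z = 0.40                         # stand-in credibility for the quintile's own experience

# Option 3 blends the quintile's raw experience back toward the hazard group.
blended = Z * raw_quintile_relativity + (1 - Z) * hazard_group_relativity
print(f"{blended:.3f}")  # 1.140
```
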
  • Thanks for pointing this out. A new version is available in the PowerPack section of the wiki.

    https://www.battleacts8.ca/8/Excel/PE1_Exam8_v3.xlsm…

  • In general we want to match the scale of the link function, so logging would make sense. However (p12 of the GLM text), when we log a variable the assumption is there is a linear relationship between the logged variable and the logged mean of our response va…
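
A small sketch of what logging a continuous predictor implies under a log link, using a Poisson GLM from statsmodels (the data and variable names are invented for illustration):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Simulated rating data: the mean response is a power function of the predictor.
amount_of_insurance = rng.uniform(50_000, 500_000, size=1_000)
true_mean = 0.02 * (amount_of_insurance / 100_000) ** 0.5
claim_count = rng.poisson(true_mean)

# Entering ln(x) as the predictor assumes the logged mean of the response is
# linear in ln(x), i.e. the mean is a power function of x.
X = sm.add_constant(np.log(amount_of_insurance))
model = sm.GLM(claim_count, X, family=sm.families.Poisson())  # Poisson's default link is log
print(model.fit().params)  # slope estimates the power (roughly 0.5 here)
```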

  • Could you please look at our solution, which is available in the PowerPack here: https://www.battleacts8.ca/8/Excel/2017_Exam8_Solns_v1.xlsm

  • No, our solution already includes the tax multiplier of 1.031; it's contained within the round function. You may also verify this directly in the Fisher text, as this is their problem #7, with the solution on page 108 of the PDF.

  • The key here is understanding the relationship between the per-occurrence deductible and the aggregate deductible. Losses in excess of the per-occurrence deductible are used to price the per-occurrence deductible. Losses below the per-occurrence …
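
A minimal sketch of splitting a set of occurrence losses between the per-occurrence deductible and an aggregate deductible (all amounts are illustrative and the structure is simplified):

```python
losses = [40_000, 120_000, 15_000, 90_000, 60_000]
per_occ_ded = 50_000
agg_ded = 150_000   # cap on the total losses the insured retains

below_per_occ = sum(min(x, per_occ_ded) for x in losses)       # feeds the aggregate deductible
excess_per_occ = sum(max(x - per_occ_ded, 0) for x in losses)  # insurer pays these regardless

retained_by_insured = min(below_per_occ, agg_ded)              # aggregate deductible caps retention
paid_by_insurer = excess_per_occ + max(below_per_occ - agg_ded, 0)
print(retained_by_insured, paid_by_insurer)  # 150000 175000
```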

  • Please let us know which practice exam(s) you are looking at (and ideally the worksheet name), and we will address this today.

  • I think you'd get almost all partial credit based on your explanation. Here's why I think it's not a complete answer: you say there is a factor of 0.636 for other rating factors, yet the question clearly says this is a merit rating plan with…

  • This is tricky, and the CAS may not always make it clear, so be sure to state your assumptions.

    This question asks for the average increase in excess loss, which implies it is the aggregate loss amount we're interested in. Ha…

  • Yes, for a 20% quota share without the max ceded loss constraint, the 1-in-100 year ceded loss would be 20%*100m = 20m and so the company needs 100m - 20m = 80m in capital to meet the 1-in-100 year requirement.

    I'm not entirely sure where y…

    in 2016 Q19
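
Re-running the arithmetic from the comment above:

```python
gross_1_in_100_loss = 100_000_000
cession_pct = 0.20   # 20% quota share, no maximum ceded loss constraint

ceded_1_in_100 = cession_pct * gross_1_in_100_loss   # 20m ceded to the quota share
net_1_in_100 = gross_1_in_100_loss - ceded_1_in_100  # 80m retained
print(ceded_1_in_100, net_1_in_100)                  # capital needed equals the net amount
```
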
  • Unfortunately the CAS exam process is pretty opaque, so we don't have great insight into how they assign points. Speaking as someone who has spent a fair amount of time teaching in the US university system, graders tend to be under tight deadlines…

  • This is basically the law of large numbers - when the expected losses increase it is generally assumed to be due to more claims coming in rather than the expected claim severity changing. So with more claims we expect there to be less variation t…

    in 2016 #12c
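
A quick simulation sketch of that effect: holding the severity distribution fixed and raising only the expected claim count shrinks the relative variation in aggregate losses (all parameters below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

def agg_cv(expected_claims, n_sims=20_000):
    """Coefficient of variation of simulated aggregate losses."""
    counts = rng.poisson(expected_claims, size=n_sims)          # frequency
    totals = np.array([rng.lognormal(9.0, 1.0, size=n).sum()    # fixed severity distribution
                       for n in counts])
    return totals.std() / totals.mean()

# More expected claims -> proportionally less variation in the aggregate.
print(agg_cv(10), agg_cv(100))
```
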
  • Yes, in the exam you'll definitely need to clearly explain your work/how you applied the method. The new post-exam summary from the CAS said last time there were considerable points docked for unclear calculations :(

    We have reviewed the Pe…

  • You're not doing anything wrong. When Z is relatively small, we will end up putting more weight on the oldest year because 1-Z is large, so (1-Z)^(n-1) doesn't go to 0 fast. It's very possible for the oldest year to receive a large amount of credi…
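
A short sketch of the weight pattern being described, assuming the standard chain where the most recent year gets Z, the next gets Z(1-Z), and the oldest year keeps the leftover (1-Z)^(n-1):

```python
def year_weights(Z, n):
    """Weights on years 1 (most recent) through n (oldest)."""
    weights = [Z * (1 - Z) ** k for k in range(n - 1)]
    weights.append((1 - Z) ** (n - 1))   # oldest year picks up whatever is left
    return weights

print(year_weights(0.2, 5))  # oldest year keeps (0.8)^4 = 0.41 of the weight
print(year_weights(0.8, 5))  # with a high Z the oldest year keeps almost nothing
```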

  • Thanks for pointing this out. It is an oversight on our part. We have updated the solution file in the PowerPack.

  • I think the difficulty here lies in converting between word equations and symbols when there is inconsistent notation throughout the industry. On page 40 of the Fisher text, footnote 19 says:

    "If a retro policy also has a per clai…

    in Step 10
  • Yes, it is a coincidence that the annual basic limit premiums for Premises/Operations ($75,000) and Products ($25,000) sum to $100,000, which is the same figure as the combined single limit per-occurrence for bodily injury and property damage (Prem…

  • I think you're confusing the idea of a variable expense percentage with a risk multiplier.

    In Fisher.RiskSharing we see the retrospective premium is R = (B+cL)T. If c is the variable expense percentage then the retro premium fails to accoun…

    in Step 10
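
Evaluating the formula quoted above, R = (B + cL)T, with purely illustrative inputs (and ignoring any maximum/minimum premium constraints):

```python
B = 60_000    # basic premium
c = 1.10      # the 'c' in the formula, however the plan defines it
L = 180_000   # losses entering the retro formula, per the plan's terms
T = 1.031     # tax multiplier

R = (B + c * L) * T
print(round(R))  # 265998
```
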
  • For a quantile plot, the source text (p.77) actually says all we need to do is plot the average actual pure premium and the average predicted pure premium for each quantile. Since we have two separate plots with potentially very different scales,…
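
A minimal sketch of building those paired averages, assuming quantiles are formed on the model's predicted pure premium (data and bucket count are invented for illustration):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)

# Illustrative policy-level data: model prediction and the actual pure premium.
df = pd.DataFrame({"predicted": rng.gamma(2.0, 150.0, size=5_000)})
df["actual"] = df["predicted"] * rng.lognormal(0.0, 0.5, size=len(df))

# Bucket policies into quantiles of the prediction, then average both columns
# within each bucket - those paired averages are what get plotted.
df["quantile"] = pd.qcut(df["predicted"], q=10, labels=False)
print(df.groupby("quantile")[["actual", "predicted"]].mean())
```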

  • Let's take a simplified example. I have two auto policies, one with a 25k limit and another with a 50k limit. Assume drivers who take either policy have the same loss frequency. Further assume that any loss is a total loss on the policy.

    Th…
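
The arithmetic behind that simplified example (the frequency value is made up; the point is only the ratio):

```python
frequency = 0.05                 # same expected claims per policy for both drivers
limit_a, limit_b = 25_000, 50_000

# Every claim is a total loss, so each claim pays the full policy limit.
expected_loss_a = frequency * limit_a   # 1250.0
expected_loss_b = frequency * limit_b   # 2500.0
print(expected_loss_a, expected_loss_b)
```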

  • TT claims are used as the exposure base for the lower frequency, higher severity injury types. Frequency is measured per $100 of payroll and severity is backed into. All raw frequencies and severities by injury type are converted into relativitie…

    in 2012Q5
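
A small sketch of what "converted into relativities" means here, taking a relativity as the class value divided by the corresponding overall value (the base used in the paper may differ; all numbers are illustrative):

```python
overall_frequency = 0.06   # claims per $100 of payroll, all classes combined
class_frequency   = 0.09

overall_severity = 4_000
class_severity   = 5_200

freq_relativity = class_frequency / overall_frequency  # 1.5
sev_relativity  = class_severity / overall_severity    # 1.3
print(freq_relativity, sev_relativity)
```
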
  • A key idea of Couret & Venter's paper is that the TT counts serve as an exposure base for the lower frequency but higher severity accidents. This is why we measure everything relative to TT claims - we can then scale.

    You have the set …
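
A minimal sketch of that scaling idea: ratios to TT claims behave like frequencies with TT counts as the exposure base, so they can be rescaled to any class's own TT volume (counts below are illustrative):

```python
injury_counts = {"TT": 2_000, "Major": 120, "PT": 10, "F": 4}

ratios_to_tt = {k: v / injury_counts["TT"] for k, v in injury_counts.items() if k != "TT"}

# Predict serious-injury counts for a class that produced 85 TT claims.
class_tt_claims = 85
predicted = {k: round(r * class_tt_claims, 2) for k, r in ratios_to_tt.items()}
print(ratios_to_tt)
print(predicted)
```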

  • Yes, use the one which is closest in absolute value. There isn't a restriction about going over or being under.
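
In code terms, "closest in absolute value" is just the candidate with the smallest absolute distance to the target, with no preference for being over or under (values below are illustrative):

```python
candidates = [0.95, 1.02, 1.10]
target = 1.05

closest = min(candidates, key=lambda v: abs(v - target))
print(closest)  # 1.02 (0.03 away, versus 0.05 for 1.10)
```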

  • We reviewed the Fisher source material; on page 40 (46 in the PDF), footnote 19 says "If a retro policy also has a per claim loss-limit, the charge for that is sometimes considered part of the insurance charge, and sometimes cons…

  • As you say, a continuous approximation model isn't really defined anywhere in the source material. The closest the source comes is mentioning that applying Panjer's algorithm is an example of a collective risk model.

    In Panjer's algorithm w…
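
For concreteness, a compact sketch of Panjer's recursion with a Poisson frequency and a discretized severity (the frequency mean and severity masses below are illustrative):

```python
import numpy as np

def panjer_poisson(lam, severity_pmf):
    """Aggregate-loss pmf via Panjer's recursion with Poisson(lam) frequency.

    severity_pmf[j] = P(severity = j) on a discretized scale, with
    severity_pmf[0] assumed to be 0.
    """
    f = np.asarray(severity_pmf, dtype=float)
    g = np.zeros(len(f))
    g[0] = np.exp(-lam)                 # P(aggregate = 0) = P(no claims)
    for s in range(1, len(f)):
        j = np.arange(1, s + 1)
        g[s] = (lam / s) * np.sum(j * f[j] * g[s - j])
    return g

sev = [0.0, 0.5, 0.3, 0.2] + [0.0] * 6   # severity mass on 1, 2, 3 units
agg = panjer_poisson(2.0, sev)           # aggregate pmf on 0..9
print(agg.round(4), agg.sum().round(4))  # sum < 1 because the support is truncated at 9
```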

  • A driver with 0 accident-free years has had at least one claim in the last year. Suppose there are N such drivers in this group. A key assumption of Bailey & Simon is that the observed claim frequency, lambda, for the class is the same for all sub…
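
A minimal sketch of the one-year claim probability under that equal-frequency assumption, treating claims as Poisson (figures are illustrative):

```python
import math

lam = 0.10              # expected claims per driver per year, same for everyone in the class
total_drivers = 10_000

# Probability a driver has at least one claim this year, hence 0 accident-free years.
p_at_least_one = 1 - math.exp(-lam)
expected_n = total_drivers * p_at_least_one
print(round(p_at_least_one, 4), round(expected_n))  # 0.0952 952
```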