
Research conducted on home visitation efforts and other family support initiatives underscores the need for examining the unique impacts various client, staff, community, and service characteristics can have on participant outcomes. While it is tempting for those interested in expanding early intervention efforts to simply select a model program for exact replication in community after community, such action is unwise.
Effectively using available research to enhance parental capacity requires more than replicating a promising intervention. The process involves careful attention to the context in which the program will be placed, the proposed target population it will serve, and the broader health, social service, economic and political environment in which it will operate. Healthy Families America has been developed in light of these concerns.
As HFA efforts are underway in communities with different economic, socio-cultural and political climates, we believe that all HFA pilot programs must include a comprehensive evaluation component. To this end, we have urged HFA partners from the start to build in evaluations of their efforts, even if they are small, so that they can contribute to the knowledge base even as they maximize the effectiveness of their own efforts.
Every HFA site should be implemented with an eye toward eventually conducting a rigorous participant outcome evaluation. Depending upon the size of the site and the resources available, such an evaluation may begin in the site's initial operating year. It is more likely, however, that sites will operate one to two years before a formal outcome evaluation can be supported. While each site will need to develop an evaluation approach that complements the program's specific service goals, staffing skills and participant characteristics, we believe all evaluation designs should strive to incorporate as many of the following principles as possible.
Provision for a formal control or comparison group
Range of outcome measures
Multiple methods of data collection
Broad measures of outcomes
Standardized measures of outcomes
Integrated data collection system
Subsequent assessment on clients
Post-program interviews or observations on program recipients
Post-program contact with all families who drop out of services
Documentation of the process
In order to attribute client improvements to the provision of home visitor services, you need to be able to report on the experiences of a similar group of parents who did not receive the intervention. This is best done by randomly assigning clients either to receive or not receive the intervention. An alternative to this approach might be to randomly assign clients to different treatment conditions (e.g., more intensive services versus less intensive services or home visitor services versus clinic-based services). If random assignment is not possible, the evaluator should identify a similar group of clients (e.g., matching program recipients in terms of age, race, income, risk of child abuse, etc.) who do not have access to the intervention in question. Change in this group should then be measured with the same methodology used to assess progress in the treatment group.
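The random-assignment procedure described above can be sketched in code. The helper below is purely illustrative (the function name, equal-split design, and fixed seed are assumptions, not an HFA specification); a real trial would also log assignments and might stratify by site or risk level.

```python
import random

def assign_condition(client_ids, seed=42):
    """Randomly assign each enrolled client to the treatment (home
    visiting) or control condition in roughly equal numbers.

    Hypothetical helper for illustration only.
    """
    rng = random.Random(seed)  # fixed seed so assignments are reproducible
    ids = list(client_ids)
    rng.shuffle(ids)           # random order removes selection bias
    midpoint = len(ids) // 2
    return {
        "treatment": set(ids[:midpoint]),
        "control": set(ids[midpoint:]),
    }
```

Because both groups are drawn at random from the same enrollment pool, differences later measured between them can be attributed to the intervention rather than to pre-existing differences between families.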
Any evaluation of home visitor services should monitor changes in a number of areas. For the parents, these areas might include subsequent reports of abuse or neglect; the use of both formal and informal social supports, particularly health care systems (e.g., well-baby visits, emergency room usage); knowledge of parenting and child development; attitudes toward and use of corporal punishment; parental stress; the quality of parent-child interaction; subsequent pregnancies; and employment or educational achievement rates. For the children, these areas might include infant mortality; physical development; cognitive development; and social functioning. While the specific outcomes selected will be influenced by the content and focus of a given home visitor service, all evaluations should assess change in at least three areas for adults and two areas for children.
While standardized measures provide one way to obtain assessment data, any evaluation design will be strengthened if multiple data sources are used. Information with respect to the presence or severity of various functioning problems or risk factors might also be obtained through structured staff assessments and careful administrative record reviews.
Child abuse reporting systems in most states are fraught with problems and are subject to a wide range of human error. Parents who engage in abusive or neglectful behavior, particularly those with preschool children, may never be reported simply because no one outside the immediate family observes the children on a regular basis. Further, not all reported cases are investigated or accurately documented in the state's central registry. While documenting the number of program recipients reported for maltreatment during services or following termination may provide a crude indicator of a program's failure rate, the absence of such a report in a given case should not constitute a program's only measure of success.
The use of standardized assessment measures, as opposed to clinical assessments of a client's overall progress, will enhance the evaluation's reliability and validity. While a range of measures can be used to assess the constructs identified above, the Child Abuse Potential Inventory (CAP) developed by Joel Milner may be the most relevant to the largest number of programs. It not only provides a standardized measure of the risk for maltreatment across various populations but also can serve as a pre/post measure of program impact. Other standardized measures successfully utilized in assessing home visitor services include Caldwell's Home Observation for Measurement of the Environment (HOME) scale; Richard Abidin's Parenting Stress Index (PSI); Stephen Bavolek's Adult-Adolescent Parenting Inventory (AAPI); Kathryn Barnard's Nursing Child Assessment Satellite Training (NCAST-II); Helfer, Hoffmeister and Schneider's Michigan Screening Profile of Parenting (MSPP); and Pascoe's Maternal Social Support Index (MSSI).
The information needed to assess program outcomes should be an integral part of a program's intake and periodic assessment efforts. At a minimum, programs should maintain the following information on all families being provided with home visiting services: demographic data on the client and family unit; socio-economic data; initial and subsequent status on all outcome measures of interest; and all services offered and/or provided by the program. While programs may elect to conduct more extensive assessments on some subset or sample of their clients, this basic set of data should be obtained on all families receiving services.
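The minimum data set described above can be represented as a simple per-family record. The sketch below is a hypothetical structure (the class and field names are assumptions for illustration, not an HFA data standard) showing how the four required categories might be captured at intake and updated at each periodic assessment.

```python
from dataclasses import dataclass, field

@dataclass
class ClientRecord:
    """Minimal per-family record covering the four categories named
    in the text. Field names are illustrative, not a specification."""
    client_id: str
    demographics: dict                 # e.g., parent age, family composition
    socio_economic: dict               # e.g., income bracket, education level
    assessments: list = field(default_factory=list)  # (date, measure, score)
    services: list = field(default_factory=list)     # services offered/provided

    def add_assessment(self, date, measure, score):
        # Record initial and subsequent status on each outcome
        # measure of interest (e.g., a pre/post CAP score).
        self.assessments.append((date, measure, score))

    def add_service(self, date, service):
        # Log every service offered or provided by the program.
        self.services.append((date, service))
```

Keeping these fields in a single record for every enrolled family means outcome data accumulate as a by-product of routine intake and periodic assessment, rather than requiring a separate data-collection effort when the evaluation begins.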
Because of the enormous adjustments a family experiences during a child's first year of life, it is important to obtain frequent assessments of families during this period. This is particularly true for those families who are in the control or comparison groups, where such changes and stress will not be observed or mitigated by service providers.
While follow-up is often time-consuming and difficult, some effort should be made to document the status of clients at some point (three to six months) following the termination of services. Ideally, program recipients would be interviewed in person or by telephone three to six months after the intervention has ceased to obtain assessments of their status on key program outcome measures. If such interviews are not possible, participant status might be tracked through various administrative records such as the child abuse central registry, Medicaid payment records or welfare records.
A significant percentage of families enrolling in preventive programs drop out of services. Often, these families may be those at greatest risk for maltreatment. Since the ultimate goal of the Healthy Families America Initiative is to offer voluntary home visiting services to all new parents, it is important to understand why these families choose to terminate their involvement in services.
In order to facilitate the future expansion of intensive home visitor programs, evaluation plans should include a detailed accounting of how current HFA efforts established their goals, selected their staff and determined their program content and focus.
For more detailed information on developing an evaluation component including description of standardized measures, sample forms that can be used to monitor client progress and service delivery as well as an overview of the national HFA research network and Program Management Information System, please contact NCPCA at (312) 663-3520.