Computational Methods to Interpret Pathophysiological Data and Make Sound Decisions

· TGI - Informatics

 

Figure: Pain in acute myocardial infarction (rear view)

 

Dr. Larry et al. described a software agent for evaluating multiple tests against a large database. It is especially useful when the time available for making a decision is critical to the successful treatment of serious conditions such as stroke or acute myocardial infarction (AMI).

 

Such a decision support system is invaluable in providing medical staff with the information needed to make critical decisions, says Dr. Larry.
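As a rough illustration of what such an agent does at its simplest, the sketch below scores a panel of cardiac markers against fixed decision values. The test names and cutoffs are hypothetical placeholders chosen for the example, not values taken from the system described above.

```python
# Minimal sketch of a rule-based decision-support check for suspected AMI.
# Test names and cutoff values are illustrative assumptions only.
from __future__ import annotations
from dataclasses import dataclass

@dataclass
class TestResult:
    name: str
    value: float
    cutoff: float  # decision value at or above which the test is "positive"

def evaluate(results: list[TestResult]) -> tuple[int, list[str]]:
    """Count positive tests and report which ones fired."""
    positives = [r.name for r in results if r.value >= r.cutoff]
    return len(positives), positives

if __name__ == "__main__":
    panel = [
        TestResult("CK-MB (ng/mL)", 12.0, 5.0),       # hypothetical cutoff
        TestResult("LD-1 fraction (%)", 45.0, 40.0),  # hypothetical cutoff
        TestResult("Troponin I (ng/mL)", 0.8, 0.4),   # hypothetical cutoff
    ]
    n_pos, names = evaluate(panel)
    print(f"{n_pos}/{len(panel)} tests positive: {names}")
```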

 

Source:

 

http://pharmaceuticalintelligence.com/2012/08/12/1815/

 

1 Comment

  1. larryhbern

    The tests used are no longer of great interest, but very little had been done with neural networks when Dr. Isaac Mayzlin wrote the Maynet program based on the study and data of Bernstein, Babb, and Rudolph in Clin Chem 1988. Christos Tsokos had previously shown that, to use LD-1 and Pct LD-1 (measured by inhibition of LD with assays at high and low pyruvate, forming a ternary complex), the distributional requirements could only be satisfied using log(LD-1) and Tukey's folded-log of Pct LD-1. This mandated nonparametric density estimation to separate the groups.

    Dr. Mayzlin, a distinguished mathematician from the University of Moscow, wrote a computer algorithm for the IBM PC that executed an artificial neural network (ANN), which is essentially a nonparametric discriminant function. The program had only one output variable, MI vs. not MI, whereas one would eventually like to have several. This was not computationally possible for another 20 years, and there were a number of problems to be overcome, such as the number of predictors, the number of classes, and the sample size in each group. Even at that time, though, I suggested that Dr. Mayzlin look at the effect of preclassifying the data based on Rudolph's method for determining the optimum decision value for each variable to achieve minimal error. Preclassifying the data and then training the ANN gave as good a result as was possible. Rudolph and Bernstein had shown that the results obtained with Ronald Fisher's original discriminant function on the petal and sepal measurements were equivalent.

    The main point is that the algorithms required in the highly complex and error-prone domains of genomic and proteomic data will have to be very good and robust to noise.
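A minimal sketch of the preclassification-then-ANN idea described in the comment: each transformed predictor is dichotomized at the cutoff that minimizes its univariate misclassification error, and a small network with a single binary output (MI vs. not MI) is trained on the result. The simulated data, the transforms, and the network size are assumptions for illustration, not the original Maynet implementation.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Simulated lab values standing in for the 1988 data set: total LD-1 activity
# and percent LD-1, with MI cases shifted upward.  Purely illustrative.
n = 200
y = rng.integers(0, 2, n)                                     # 1 = MI, 0 = not MI
ld1 = np.exp(rng.normal(4.0 + 0.6 * y, 0.3))                  # log-normal LD-1 activity
pct = 1.0 / (1.0 + np.exp(-rng.normal(-0.5 + 1.2 * y, 0.5)))  # Pct LD-1 in (0, 1)

# Transforms analogous to those named in the comment: log(LD-1) and a
# folded-log of the proportion, 0.5 * [ln(p) - ln(1 - p)].
x1 = np.log(ld1)
x2 = 0.5 * (np.log(pct) - np.log(1.0 - pct))

def best_cutoff(x, y):
    """Brute-force the single decision value on x that minimizes misclassification error."""
    best_c, best_err = x[0], 1.0
    for c in np.unique(x):
        err = min(np.mean((x >= c) != y), np.mean((x < c) != y))
        if err < best_err:
            best_c, best_err = c, err
    return best_c

c1, c2 = best_cutoff(x1, y), best_cutoff(x2, y)

# Preclassification: dichotomize each predictor at its optimal cutoff, then
# train a small ANN with a single binary output on the indicators alongside
# the transformed values.
X = np.column_stack([x1, x2, (x1 >= c1).astype(float), (x2 >= c2).astype(float)])
clf = MLPClassifier(hidden_layer_sizes=(4,), max_iter=2000, random_state=0)
clf.fit(X, y)
print("training accuracy:", round(clf.score(X, y), 3))
```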

