Rough Sets and Intelligent Systems - Professor Zdzisław Pawlak in Memoriam: Volume 1



Rough Sets and Intelligent Systems 2013, Volume 2

The book includes 10 chapters in which the interested reader can find discussion of important issues encountered during the development of well-known agent platforms such as JADE and Jadex, as well as some interesting experiences in developing a new platform that combines software agents and Web Services. Furthermore, the book shows readers several valuable examples of applications based on multi-agent systems, including simulations, agents in autonomous negotiations and agents in public administration modelling.

We believe that the book will prove useful to researchers, professors and practitioners in all disciplines, including science and technology. The volume aims to exploit the conceptual and algorithmic framework of Computational Intelligence (CI) to form a cohesive and comprehensive environment for building models of time series.

The contributions covered in the volume are fully reflective of the wealth of the CI technologies, bringing together ideas, algorithms, and numeric studies that convincingly demonstrate their relevance, maturity and visible usefulness. It reflects the truly remarkable diversity of methodological and algorithmic approaches and case studies. This volume is aimed at a broad audience of researchers and practitioners engaged in various branches of operations research, management, social sciences, engineering, and economics. Owing to the nature of the material being covered and the way it has been arranged, it establishes a comprehensive and timely picture of the ongoing pursuits in the area and fosters further developments.

The book is divided into four parts, the first of which focuses on clustering and classification. The second part puts the spotlight on multisets, bags, fuzzy bags and other fuzzy extensions, while the third deals with rough sets. Rounding out the coverage, the last part explores fuzzy sets and decision-making. This book consists of 16 contributed chapters by subject experts specialized in the various topics addressed. The special chapters have been brought out in the broad areas of Control Systems, Power Electronics, Computer Science, Information Technology, modelling and engineering applications.

Special importance was given to chapters offering practical solutions and novel methods for the recent research problems in the main areas of this book, viz. This book will serve as a reference book for graduate students and researchers with a basic knowledge of control theory, computer science and soft-computing techniques.

It brings new approaches and methods to real-world problems, along with exploratory research describing novel approaches in mathematical methods, computational intelligence and software engineering for intelligent systems. This book constitutes the refereed proceedings of Computational Methods in Systems and Software, a conference that provided an international forum for the discussion of the latest high-quality research results in all areas related to computational methods, statistics, cybernetics and software engineering.

Andrzej Skowron, Zbigniew Suraj (Eds.). Professor Zdzisław Pawlak is the founder of the Polish school of Artificial Intelligence and one of the pioneers in Computer Engineering and Computer Science with worldwide influence. He was a truly great scientist, researcher, teacher and human being. We are proud to offer readers this book.




The equivalence class specified by the object x i with respect to the indiscernibility relation R B is denoted as [ x i ] B.

The goal of the CRSA is to provide a definition of a concept according to the values of the attributes of the equivalence classes that contain objects that are known instantiations of the concept. As such, in a consistent decision table, membership in a conditional class implies membership in a particular decision class. To represent an inconsistent decision table, the CRSA establishes an upper and lower approximation for each decision class, Y.

The lower approximation comprises all objects that definitely belong to Y, while the upper approximation includes all objects that possibly belong to Y. Thus, the lower approximation of Y contains every object x i whose equivalence class [ x i ] B is wholly contained in Y, while the upper approximation contains every object x i whose equivalence class [ x i ] B has a non-empty intersection with Y.
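The lower and upper approximations just described can be computed in a few lines of Python. This is a toy illustration of the classical (CRSA) definitions, not the authors' implementation; the decision table, attribute values and class membership below are invented for the example:

```python
def approximations(table, concept):
    """B-lower and B-upper approximations of a concept Y (CRSA sketch).

    table: dict of object id -> tuple of condition-attribute values (B)
    concept: set of object ids known to belong to decision class Y
    """
    # Partition objects into equivalence classes of the indiscernibility
    # relation R_B: objects are indiscernible iff they agree on every attribute.
    classes = {}
    for obj, values in table.items():
        classes.setdefault(values, set()).add(obj)

    lower, upper = set(), set()
    for eq_class in classes.values():
        if eq_class <= concept:   # [x]_B wholly inside Y: certain members
            lower |= eq_class
        if eq_class & concept:    # [x]_B overlaps Y: possible members
            upper |= eq_class
    return lower, upper

# Toy decision table: four patients described by (pulse category, diagnosis).
U = {
    "p1": ("high", "coma"), "p2": ("high", "coma"),
    "p3": ("normal", "coma"), "p4": ("normal", "mi"),
}
Y = {"p1", "p3"}  # patients who did not survive six months
low, up = approximations(U, Y)
print(sorted(low))  # ['p3']  (p1 is indiscernible from p2, which lies outside Y)
print(sorted(up))   # ['p1', 'p2', 'p3']
```

The gap between the two sets (here p1 and p2) is exactly the boundary region created by an inconsistent table.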


The quality of approximation is expressed as the ratio of the number of objects in the lower approximations of all decision classes to the total number of objects in the decision table. In place of the indiscernibility relation, the DRSA introduces a dominance relation that allows for ordinal attributes with preference-ordered domains, wherein a monotonic relationship exists between the attribute and the decision classes. To differentiate between attributes with and without a preference-ordered domain, those with a preference order are called criteria, while those without are referred to as attributes, as in the CRSA.

This relation is straightforward for gain-type criteria (the more, the better), and can be easily reversed for cost-type criteria (the less, the better). Criterion preference relations are then organized in the direction of the decision class; values which generally contribute to the incidence of coronary disease are preferred over those which indicate lower risk, much in the same way that a positive diagnosis indicates presence of coronary disease.

No such preference relation exists for Gender; as such, it is considered an attribute. The decision classes are preference-ordered according to the decision maker; therefore, each patient in Y 2 is preferred to each patient in Y 1. The upward union of classes at Y t contains all objects belonging to class Y t or a more preferred class, while the downward union contains all objects belonging to Y t or a less preferred class. Considering the dominance cones, the lower and upper approximations of the unions of decision classes are defined analogously to the CRSA: the upper approximations represent objects that possibly belong to one of the upward or downward unions of decision classes. By allowing inconsistencies, the VC-DRSA avoids overfitting the training set and thus may be more effective in classifying new cases.
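The dominance-based approximations can be sketched as follows. This is a minimal illustration of the DRSA idea for a single upward union, assuming gain-type criteria; the scores and class labels are invented for the example:

```python
def dominates(x, y):
    """x dominates y iff x is at least as good on every criterion (gain-type)."""
    return all(a >= b for a, b in zip(x, y))

def upward_union_approximations(objects, labels, t):
    """Lower/upper approximation of the upward union Y_t>= (DRSA sketch).

    objects: criterion-value tuples (higher = better)
    labels: class indices (higher class = preferred)
    """
    union = {i for i, c in enumerate(labels) if c >= t}
    lower, upper = set(), set()
    for i, x in enumerate(objects):
        d_plus = {j for j, y in enumerate(objects) if dominates(y, x)}   # dominating cone
        d_minus = {j for j, y in enumerate(objects) if dominates(x, y)}  # dominated cone
        if d_plus <= union:
            lower.add(i)   # every object at least as good as x lies in Y_t>=
        if d_minus & union:
            upper.add(i)   # some object no better than x lies in Y_t>=
    return lower, upper

# One gain-type criterion; objects 1 and 2 are inconsistent (same value,
# different classes), so neither reaches the lower approximation.
scores = [(1,), (2,), (2,), (3,)]
classes = [1, 1, 2, 2]
low, up = upward_union_approximations(scores, classes, t=2)
print(sorted(low))  # [3]
print(sorted(up))   # [1, 2, 3]
```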

There are a number of methods available for induction of decision rules from the lower or upper approximations of the decision classes [40-42] or from reducts extracted from the decision table [43]. In both cases, decision rules are induced from approximations of decision classes. Once decision rules have been induced, the collection of these rules can then be used to classify unseen objects; in the case of our example table, a new patient who may have cardiac disease. The antecedent is a logical conjunction of descriptors and the consequent is the decision class or union of decision classes suggested by the rule.

Formally, in the CRSA, decision rules are generated from the lower or upper approximations. In the DRSA, decision rules are induced from the lower approximations and the boundaries of the unions of decision classes. From the lower approximations, two types of decision rules are considered. Subsequent iterations again select the best decision rule and remove the covered objects until reaching a stopping criterion or until all of the objects in the unions of decision classes are described by a rule in the rule set.
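The iterative select-and-remove scheme described above is a standard sequential-covering loop. The skeleton below is a generic sketch, not the actual MODLEM or VC-DomLEM implementation; the Rule type and the toy rule inducer are invented for illustration:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    label: str
    covers: Callable[[int], bool]   # predicate: does this rule cover an object?

def sequential_covering(objects, induce_best_rule):
    """Greedy covering: induce the best rule, drop covered objects, repeat."""
    rules, remaining = [], set(objects)
    while remaining:
        rule = induce_best_rule(remaining)
        if rule is None:            # stopping criterion: no acceptable rule left
            break
        rules.append(rule)
        remaining = {o for o in remaining if not rule.covers(o)}
    return rules

# Toy inducer over integers: first cover the large values, then the rest.
def toy_inducer(remaining):
    if any(o >= 10 for o in remaining):
        return Rule("high", lambda o: o >= 10)
    if remaining:
        return Rule("low", lambda o: o < 10)
    return None

rules = sequential_covering([3, 12, 7, 40], toy_inducer)
print([r.label for r in rules])  # ['high', 'low']
```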

To ensure minimality, antecedent descriptors, called elementary conditions, of each rule are checked at each iteration and redundant elementary conditions are removed. Similarly, redundant rules are removed from the final rule set.


In both algorithms, decision rules are grown by consecutively adding the best available elementary condition to the rule. MODLEM does not restrict elementary conditions to those attributes not currently in the rule; as such, multiple elementary conditions may contain the same attribute. Therefore, a decision rule induced by MODLEM may contain antecedents in which attribute values are described as belonging to a range or a set of values or as being greater or less than a particular value. Dominance-based elementary conditions are evaluated according to a rule consistency measure.

To classify an unseen object, a standard voting process [43] is used to allow all rules to participate in the decision process, arriving at a patient classification by majority vote. Each rule is characterized by two support metrics. The left-hand side (LHS) support is the number of patients in the table whose attributes match the antecedent, while the right-hand side (RHS) support is the number of those patients who also belong to the decision class given by the consequent. A new patient matching the antecedent of this rule will receive two votes for decision class Yes and zero votes for decision class No. The resultant ratio of RHS to LHS support is considered a frequency-based estimate of the probability that the patient belongs to the given decision class.
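The voting process can be sketched as below, assuming each matching rule votes for its decision class with a weight equal to its RHS support (as the two-vote example in the text suggests). The rules, conditions and support counts are hypothetical:

```python
def classify_by_voting(rule_set, patient):
    """Standard-voting sketch: every rule whose antecedent matches the patient
    votes for its decision class, weighted by RHS support; the class with the
    most votes wins. Returns None when no rule fires (no coverage).

    Each rule is (condition, decision, lhs_support, rhs_support); the ratio
    rhs/lhs is the frequency-based probability estimate described in the text.
    """
    votes = {}
    for condition, decision, lhs, rhs in rule_set:
        if condition(patient):
            votes[decision] = votes.get(decision, 0) + rhs
    return max(votes, key=votes.get) if votes else None

# Hypothetical rule set (conditions and supports are illustrative only).
rules = [
    (lambda p: p["age"] >= 65, "dead", 10, 8),
    (lambda p: p["pulse"] < 100, "alive", 6, 5),
    (lambda p: p["diagnosis"] == "coma", "dead", 4, 3),
]
patient = {"age": 70, "pulse": 80, "diagnosis": "mi"}
print(classify_by_voting(rules, patient))  # dead  (8 votes vs 5)
```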

For example, if the threshold value is set as 0. SUPPORT enrolled patients, 18 years or older, who met specific criteria for one of nine serious illnesses and who survived more than 48 hours but were not discharged within 72 hours. Patients were followed such that survival and functional status were known for days after entry. The result of the SUPPORT study is a prognostic model for day survival estimation of seriously ill hospitalized adults based on cubic splines and a Cox regression model.

Given the inclusion criteria, described in full in Appendix 1 of [12], the dataset is ideal for the present research in regards to clinical applicability, completeness of data, and comparability of results. Attribute names, descriptions and value ranges are listed in Table 2. Figure 1 shows the patients' Kaplan-Meier survival curve with respect to number of days until death. General observations regarding the influence of condition attributes can be made by analyzing their relation to the proportion of patients surviving the six-month period.

For example, the Kaplan-Meier survival curve in Fig. Missing physiological attribute values are filled in with a standard fill-in value representing a normal physiological response, as provided by the SUPPORT authors in [ 48 ]. The two incomplete cases were removed and the remaining 9, cases were considered in the development of the prognostic models. Discretization is the process by which appropriate categorical ranges are found for variables with a continuous value range.

There are a number of methods available for unsupervised discretization that operate without input from the decision maker and are based only on the information available in the data table. This choice is founded on the proposition that expert discretization via APACHE III will result in medically and contextually relevant classification rules and data collection requirements, thus increasing the accessibility of the proposed prognostic model and ensuring directly comparable rule sets for all evaluated rule-based methods. APACHE III scores for any given variable are close to zero for normal or only slightly abnormal values of that variable and increase according to increased severity of disease.

For example, normal pulse rates of 50—99 bpm are given a score of 0, while elevated and lowered levels, — and 40—49 bpm respectively, are both given a score of 5. Thus, higher APACHE III scores are preferred to lower scores, as the higher scores indicate greater severity of disease and therefore greater risk of death within six months considered the positive diagnosis. The remaining physiologic variables not included in APACHE III—neurologic function, scoma , and blood gasses, pafi —were discretized using clinically accepted categorizations [ 49 , 50 ].
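An APACHE-style discretization of a single variable can be sketched as below. Only the 50-99 bpm → 0 and 40-49 bpm → 5 bands come from the text; the bounds of the elevated band and the score assigned to more extreme values are illustrative placeholders, not actual APACHE III values:

```python
def pulse_severity_score(bpm, elevated_band=(100, 139), extreme_score=10):
    """Map a pulse rate to a severity score (sketch).

    NOTE: only the 50-99 -> 0 and 40-49 -> 5 bands are stated in the text;
    `elevated_band` and `extreme_score` are assumed placeholders, NOT the
    real APACHE III table values.
    """
    lo, hi = elevated_band
    if 50 <= bpm <= 99:
        return 0                 # normal range: no severity points
    if 40 <= bpm <= 49 or lo <= bpm <= hi:
        return 5                 # moderately abnormal: preference-ordered upward
    return extreme_score         # placeholder for extreme values

print(pulse_severity_score(72))   # 0
print(pulse_severity_score(45))   # 5
```

Because higher scores always indicate greater severity, the discretized variable becomes a gain-type criterion in the DRSA sense.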

The variable hday was discretized using the boolean reasoning algorithm [ 43 ]. Table 3 shows the categories defined in this process.


Higher values of each of these variables are preferred to lower values. This section provides details on the implementation and performance evaluation procedures for the comparison of the classification methods used in this study. The following two sections describe the RSA and comparative methods, respectively, along with the software used for their implementation and the selection of appropriate parameters for each of the methods.

Finally, the methods for performance evaluation are discussed. The general schema of the experimental design is as follows: after selecting appropriate parameters for each of the methods, 5-fold cross validation was used to divide the data into training and testing sets. Methods with decision rule outputs were trained and tested on the discretized data set to demonstrate expected performance of a clinically credible rule set.
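The train/test split scheme can be sketched as a plain 5-fold partition. This is a generic illustration of the procedure, not the authors' experimental code; the record count and seed are arbitrary:

```python
import random

def k_fold_indices(n, k=5, seed=0):
    """Partition n record indices into k disjoint folds for cross validation."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)   # fixed seed for reproducibility
    return [idx[i::k] for i in range(k)]

folds = k_fold_indices(100)
for test_fold in folds:
    # each fold serves once as the testing set; the rest form the training set
    train = [i for fold in folds if fold is not test_fold for i in fold]
    # fit the classifier on `train`, evaluate on `test_fold` ...
print([len(f) for f in folds])  # [20, 20, 20, 20, 20]
```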


Methods without decision rule outputs were trained on the raw, non-discretized, data set. For these methods, designed to be applied to continuous variables, discretization does not improve clinical credibility and would likely hinder performance [ 51 , 52 ]. The rule syntax follows the presentation in section Decision rules. VC-DomLEM decision rules were generated from the lower approximation of each decision class, with an object consistency level threshold l. Note that the rule consistency threshold and the object consistency threshold are equal and set at l. In order to select the most appropriate models for comparison, the performance of the rough set based models was evaluated for varying levels of rule consistency, m and l , for the CRSA and VC-DRSA respectively.

Classifier performance at a particular value of m or l is dataset-dependent; however, in general, values close to one provide rule sets that are more conservative in describing the training set objects, while values closer to zero provide rule sets that are more general.


To ensure directly comparable rule sets, C4. Each of these methodologies was applied using the software package Weka 3. Logistic regression was selected for its popularity in classification models using non-censored data and in clinical settings [ 18 , 56 ]. Support vector machines, originally presented in [ 57 ], find separating boundaries between decision classes after input vectors are non-linearly mapped into a high dimensional feature space.

Support vector machines have been investigated in survival analysis applications [58] as they, similar to the RSA-based methods, automatically incorporate non-linearities and do not make a priori assumptions about factor interactions. SVM-based models are known to perform well at classification tasks; however, they do not provide clinician-interpretable justification for their results [59]. Support vector machines were selected to evaluate whether the increased accessibility of the RSA-based methods involves a trade-off in accuracy. A decision tree built by C4. Decision trees were obtained using the Weka J48 implementation [60] of the C4.5 algorithm.

Random forests is a popular ensemble classification method based on decision trees [ 61 ]. The random forests algorithm builds an ensemble of decision trees, where each tree is built on bootstrap samples of training data with a randomly selected subset of factors. The performance of the models was tested by measuring the discriminatory power of both the m - and l -consistent decision rules sets when applied to the reserved testing data.

For our notation, a classification of d. Sensitivity is defined as the fraction of patients who did not survive six months and are correctly classified by the model, or the fraction of true positive classifications of all test patients who did not survive six months. Conversely, specificity is defined as the fraction of patients who did survive six months and were correctly classified by the model, or the fraction of true negatives of all test patients who did survive six months. The overall accuracy of the classification models is reported in terms of area under the receiver operating characteristic ROC curve, or AUC area under the curve.
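The two measures defined above follow directly from the confusion-matrix counts. A minimal sketch, using hypothetical labels "d" (did not survive six months, the positive class) and "s" (survived):

```python
def sensitivity_specificity(actual, predicted, positive="d"):
    """Sensitivity and specificity with death within six months as positive."""
    pairs = list(zip(actual, predicted))
    tp = sum(a == positive and p == positive for a, p in pairs)  # true positives
    fn = sum(a == positive and p != positive for a, p in pairs)  # false negatives
    tn = sum(a != positive and p != positive for a, p in pairs)  # true negatives
    fp = sum(a != positive and p == positive for a, p in pairs)  # false positives
    return tp / (tp + fn), tn / (tn + fp)

actual    = ["d", "d", "d", "s", "s", "s"]
predicted = ["d", "d", "s", "s", "s", "d"]
sens, spec = sensitivity_specificity(actual, predicted)
print(round(sens, 3), round(spec, 3))  # 0.667 0.667
```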

Best separation between decision classes is realized at the threshold corresponding to the point on the ROC curve closest to the point (0, 1). The coverage of the classification model is defined as the percentage of testing set patients for whom a classification is possible. Additionally, to evaluate the number of rules that would fire for an unseen patient, we collected information on the number of rules matching each test case patient for the evaluated levels of m and l.
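Selecting the threshold closest to the ideal corner (0, 1) of the ROC plot is a simple Euclidean-distance minimization. A sketch, with invented ROC points:

```python
import math

def best_operating_point(roc_points):
    """Pick the threshold whose (FPR, TPR) pair lies closest to (0, 1).

    roc_points: list of (threshold, fpr, tpr) triples along the ROC curve.
    """
    # distance from (fpr, tpr) to the ideal corner (0, 1)
    return min(roc_points, key=lambda p: math.hypot(p[1], 1 - p[2]))[0]

roc = [(0.2, 0.50, 0.90), (0.5, 0.20, 0.80), (0.8, 0.05, 0.50)]
print(best_operating_point(roc))  # 0.5
```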

A Kappa value of zero indicates classification accuracy equivalent to chance agreement. Performance of the prognostic models was evaluated using a 5-fold cross validation procedure [63] wherein training and testing sets are repeatedly selected. Cross validation is a well-known method that provides a reasonable estimate of the generalization error of a prediction model. The results are analyzed and compared. AUC and coverage for each evaluated m and l level are shown in Table 4. Figures 3 and 4 display the number of rules that fire for each patient in the five testing folds for each m and l value.
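The Kappa coefficient referenced above corrects observed agreement for the agreement expected by chance. A minimal sketch with invented labels:

```python
def cohens_kappa(actual, predicted):
    """Cohen's kappa: (observed - expected) / (1 - expected) agreement."""
    n = len(actual)
    # observed agreement: fraction of cases where labels coincide
    observed = sum(a == p for a, p in zip(actual, predicted)) / n
    # expected agreement under independent marginal label frequencies
    expected = sum(
        (actual.count(c) / n) * (predicted.count(c) / n)
        for c in set(actual) | set(predicted)
    )
    return (observed - expected) / (1 - expected)

actual    = ["d", "d", "s", "s"]
predicted = ["d", "d", "s", "d"]
print(cohens_kappa(actual, predicted))  # 0.5
```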

The quality of approximation is 0. Table 5 describes the number of rules and the number of descriptors in each rule for the two rough set approach-based classifiers at the selected consistency level of 0. The average number of MODLEM decision rules in the five rule sets generated by cross validation is rules, with mean and maximum length of 3. In Fig. This is because the rule set is generated by only two attributes and each rule contains only one attribute in the antecedent. For RF, the number of trees was explored between 10 and 1, trees at intervals of 10; the optimal number of trees thus obtained was The maximum number of attributes selected at each bootstrap iteration was also explored in the range of 1 to 15 attributes, with best performance observed when the number of attributes was limited to 1.

In the case of C4. The minimum number of instances per leaf for the C4. The pruned C4. Average sensitivity and specificity for each of the models are also shown in Table 6. For each model and cross validation fold configuration, the sensitivity and specificity were recorded at the threshold at which both values are simultaneously maximized. This threshold is equivalent to the point on the ROC plot closest to the upper left corner and represents the point of maximum accuracy of the model.

All of the methodologies show fair classification accuracy given that Kappa coefficients are in the range of 0. Together, Table 4 and Figs. The quality of approximation for the CRSA classifier is 0. In the case of the DRSA, a strict application of this information in determining the lower approximation leads to few patients in the lower approximations, thus reducing the overall quality of approximation. Consequently, decision rules generated from this approximation are too specific and less suitable for generalizing to the classification of new cases.

It is therefore reasonable to relax the conditions for assignment of objects to lower approximations. All of the rule- or decision-tree-based methods demonstrated somewhat reduced performance when compared with the non-rule based classifiers. Clinical credibility in prognostic models depends in part on the ease with which physicians and patients can understand and interpret the results of the models, in addition to the accuracy of the information they provide.


The RSA-based prognostic models present the physician with a list of matched decision rules, offering significant advantages by increasing both the traceability of the model and the amount of information included in its results. This advantage is further increased in the case of VC-DomLEM, where dominance-based decision rules permit greater information density per rule by including attribute value ranges in each rule. This patient was 41 years old with a primary diagnosis of coma.

The patient displayed moderate head injury on the Glasgow Coma Scale, elevated levels of creatinine 1. As can be seen in Table 7 , Rule 5 isolates the combination of Coma and elevated creatinine and sodium levels as a key predictor of six-month survival. In the case of Rule 5, 51 patients in the training set have similar conditions as the example patient, of which 47 did not survive six months. On the other hand, Rule 6 somewhat counterbalances this prediction, pointing to 8 young patients with moderate coma who have been in the hospital less than 44 days, of whom all 8 survived six months.

Upon further investigation, the rules matching the example patient Rules 1—4 are more general than the rules provided by the VC-DomLEM classifier. Rules 1—3 provide general rules that point to the age, level of head trauma and primary diagnosis of the patient. Considering only these three rules, the associated score would be d. Rule 4 isolates normal average heart beat, high respiratory rate and low and also very high white blood cell counts.