Detection of Incorrect Case Assignments

in Automatically Generated Paraphrases of Japanese Sentences

Atsushi Fujita Kentaro Inui Yuji Matsumoto

Graduate School of Information Science,

Nara Institute of Science and Technology

{atsush-f,inui,matsu}@is.aist-nara.ac.jp

Abstract

This paper addresses the issue of correcting transfer errors in paraphrasing. Our previous investigation into transfer errors occurring in lexical and structural paraphrasing of Japanese sentences revealed that case assignment tends to be incorrect, irrespective of the type of transfer (Fujita and Inui, 2003). Motivated by this observation, we propose an empirical method to detect incorrect case assignment. Our error detection model combines two component models. They are separately trained on a large collection of positive examples and a small collection of manually labeled negative examples. Experimental results show that our combined model significantly enhances the baseline model, which is trained only on positive examples. We also propose a selective sampling scheme to reduce the cost of collecting negative examples, and confirm its effectiveness for the error detection task.

1 Introduction

Recently, automatic paraphrasing has been attracting increasing attention due to its potential in a wide range of natural language processing applications (NLPRS, 2001; ACL, 2003). For example, paraphrasing has been applied to pre-editing and post-editing in machine translation (Shirai et al., 1993), query expansion for question answering (Ravichandran and Hovy, 2002), and reading assistance (Carroll et al., 1999; Inui et al., 2003).

There are various levels of lexical and structural paraphrasing, as the following examples demonstrate [1]:

(1) s. He accomplished the mission perfectly.
    t. He achieved the mission perfectly.

(2) s. It was a Honda that John sold to Tom.
    t. John sold a Honda to Tom.

In automating such paraphrasing, the difficulty of specifying the applicability conditions of each paraphrasing pattern is one of the major problems. For example, it is not easy to specify under what conditions "accomplish" can be paraphrased into "achieve". Paraphrasing patterns with wrong applicability conditions would produce various types of erroneous paraphrases from input, which we call transfer errors. We thus need to develop a robust method to detect and correct transfer errors in the post-transfer process, by way of a safety net.

Our previous investigation revealed that case assignment tends to be a major error source in paraphrasing of Japanese sentences (Fujita and Inui, 2003). Here is an example of incorrect case assignment: applying the paraphrasing rule "accomplish => achieve" (cf. (1)) to sentence (3s) generates (3t). But (3t) is incorrect, because the word "achieve" requires words such as "aim", "record", and "success" for its direct object.

(3) s. He accomplished the journey in an hour.
    t. *He achieved the journey in an hour.

One may suspect that incorrect case assignment can be detected simply by referring to a hand-crafted case frame dictionary which describes allowable cases and their selectional restrictions for each verb. However, in existing case frame dictionaries of Japanese, selectional restrictions are generally specified based on coarse-grained semantic classes of nouns. They are therefore not adequate for the purpose of detecting incorrect case assignments (details will be given in Section 2).

[1] For each example, s denotes an input and t denotes its paraphrase. Note that our target language is Japanese. English examples are used here for an explanatory purpose.

To capture even the differences between the usages of near-synonyms, we deal with words directly instead of relying on their semantic classes. Since a considerably large number of positive examples can be collected from existing corpora, one can construct a statistical language model and apply it to the error detection task. In this paper, to enhance such a statistical language model, we introduce the use of negative examples and address the following two issues:

1. Unlike positive examples, negative examples are generally not available. A challenging issue is therefore how to effectively use a limited number of manually collected negative examples in combination with a large number of positive examples.

2. Manual collection of negative examples is costly and time-consuming. Moreover, any such collection is sparse in the combinatorial space of words. Hence, we need an effective labeling scheme to collect negative examples that truly contribute to error detection.

2 Incorrect case assignment

2.1 Frequency

In (Fujita and Inui, 2003), we investigated transfer errors in Japanese from two points of view: (i) what types of errors occur in performing lexical and structural paraphrasing of Japanese sentences, and (ii) which of them tend to be serious problems. We implemented about 28,000 paraphrasing rules [2] consisting of various levels of lexical and structural paraphrasing, and analyzed 630 automatically generated sentences.

An important observation in (Fujita and Inui, 2003) is that case assignment tends to be incorrect, irrespective of the type of paraphrasing. A quarter of the paraphrased sentences (162/630) exhibit this type of error, making it the second most frequent error type; the most dominant type is inappropriate conjugation forms of verbs and adjectives (303/630) [3], which can be easily corrected by revising the conjugation forms.

[2] http://cl.aist-nara.ac.jp/lab/kura/KuraData/

2.2 Causes of errors

At least in Japanese, case assignment can be incorrect at three different levels:

(i) Violation of syntactic constraints: Though both of the verbs "tessuru" and "tsuranuku" have the same meaning "devote" in the context of example (4), the paraphrased sentence (4t) is incorrect because "tsuranuku" cannot take the "ni (dative)" case.

(4) s. Team play-ni tessuru.
       team play-DAT devote-PRES
       He devotes himself to team play.
    t. *Team play-ni tsuranuku.
       team play-DAT devote-PRES

(ii) Violation of selectional restrictions: The verb "katameru (strengthen)" requires a concrete object for its "o (accusative)" case. Since the noun "kontei (basis)" in the paraphrased sentence (5t) does not satisfy the constraint, (5t) is incorrect.

(5) s. Building-no kiban-o katameta.
       building-GEN foundation-ACC strengthen-PAST
       He strengthened the foundation of the building.
    t. *Building-no kontei-o katameta.
       building-GEN basis-ACC strengthen-PAST
       *He strengthened the basis of the building.

(iii) Semantic inconsistency between sibling cases: The nouns "hyogen (expressions)" and "kakuchi (every land)" satisfy the semantic constraints for the "ga (nominative)" and "ni (locative)" cases of the verb "aru (exist)", respectively. Nevertheless, the paraphrased sentence (6t) is incorrect, because the meanings described by the sibling cases are semantically inconsistent.

(6) s. Nankai-na hyogen-ga zuisho-ni aru.
       crabbed-ADJ expressions-NOM many places-LOC exist-PRES
       There are crabbed expressions in many places.
    t. *Nankai-na hyogen-ga kakuchi-ni aru.
       crabbed-ADJ expressions-NOM every land-LOC exist-PRES
       *There are crabbed expressions in every land.

[3] The third most frequent error was incorrect functional word connections, which occurred in 78 sentences. The other errors occurred in fewer than 40 sentences.

2.3 Task setting

Supposing that the case assignments in the sentences input into paraphrasing are all correct, the target of error detection is to detect anomalies yielded in the paraphrased case structures, which consist of a verb, case particles, and case fillers (nouns). To handle them, we assume dependency structures, for the following reasons:

- For English, linear structure-based statistics, e.g., n-grams, can predict human plausibility judgments, namely, correctness (Lapata et al., 2001; Keller et al., 2002). However, there is no guarantee that they perform well in Japanese, because word ordering in Japanese is relatively unrestricted compared with English.

- Most of the paraphrasing systems for Japanese deal with dependency structures (Kondo et al., 2001; Takahashi et al., 2001; Kaji et al., 2002). That is, such a system generates paraphrases annotated with a dependency structure, whatever transfer error occurs.

As mentioned in Section 1, existing case frame dictionaries tend to specify selectional restrictions relying on a coarse-grained semantic typology. For example, the difference between the two near-synonyms "kiban (foundation)" and "kontei (basis)" is crucial in the context of example (5), but most of the dictionaries do not distinguish them, classifying them into the same semantic class "basis". Such a dictionary is not adequate for detection of incorrect case assignment.

Instead, we deal with words directly. Let v, n, and c be a verb, a noun, and the case particle which connects v and n, respectively. We reduce the error detection task to the classification of the triplet <v, c, n> as correct or incorrect. A given paraphrased sentence is judged to be incorrect if and only if any of the triplets included in the sentence is classified as incorrect.

By dealing with <v, c1, n1, c2, n2> to take into account the association between two sibling cases, as in (Torisawa, 2002), we might be able to detect semantic inconsistency such as (6t) exhibits. However, considering that sibling cases are rarely semantically inconsistent [4], and that building a distribution model of <v, c1, n1, c2, n2> is likely to cause a data sparseness problem, we build an error detection model taking only <v, c, n> into account.
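The reduction above can be sketched in a few lines. The following is our own illustration, not code from the paper: `classify` stands in for the error detection model described in Section 3, and a sentence is flagged as incorrect as soon as any of its <v, c, n> triplets is.

```python
# A minimal sketch (our own, hypothetical code) of the task reduction:
# a paraphrased sentence is judged incorrect iff any of its <v, c, n>
# triplets is classified as incorrect.

def sentence_is_incorrect(triplets, classify):
    """triplets: iterable of (v, c, n) tuples; classify returns
    'correct' or 'incorrect' for a single triplet."""
    return any(classify(t) == "incorrect" for t in triplets)
```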

3 Error detection models

3.1 Issues

The error detection task looks similar to statistical machine translation in the sense that both involve the process of evaluating the appropriateness of given sentences (e.g., (Knight and Chander, 1994)). In statistical machine translation, systems use statistics to compare output candidates. Therefore, what needs to be estimated is relative likelihood. For error detection in paraphrasing, however, we need a model for judging the absolute correctness of each output candidate, for the following reason. Paraphrasing systems are developed typically for a particular purpose, such as simplifying text or controlling wording. In such systems, the variety of paraphrasing rules tends to be restricted, so the rule set sometimes produces no appropriate paraphrase candidate for a given input sentence. An error detection model therefore needs the ability not only to compare candidates but also to give up producing output when none of the candidates is correct.

If error detection is defined as the task of classifying the candidates as correct or incorrect, one may want to use both positive and negative examples to train a classifier. However, positive and negative examples are significantly imbalanced, and any collection of negative examples is likely to be too small to represent the distribution of the negative class. Therefore, it is probably not a good choice to input them into a single classifier induction algorithm such as support vector machines.

Instead, we separately train two models, the positive model (Pos) and the negative model (Neg), as illustrated in Figure 1, and then combine them to create another model (Com). Since negative examples have to be collected by hand, we also investigate the effectiveness of a selective sampling scheme to reduce human labor.

Figure 1: Model construction scheme.

The rest of this section elaborates on our error detection model and selective sampling scheme.

[4] According to the analysis in (Fujita and Inui, 2003), only 8 of the 162 incorrect case assignments had semantically inconsistent sibling cases.

3.2 Combining separately trained models

3.2.1 Positive model

Since a considerably large number of positive examples can be collected from existing corpora using a parser, one can estimate the probability P(<v,c,n>) with reasonable accuracy. On that account, we first construct a baseline model Pos, a statistical language model trained only on the positive examples.

To calculate P(<v,c,n>) while avoiding the data sparseness problem, one can use Probabilistic Latent Semantic Indexing (PLSI) (Hofmann, 1999), which bases itself on distributional clustering (Pereira et al., 1993). PLSI is a maximum likelihood estimation method. Dividing [5] <v,c,n> into <v,c> and n, one can estimate P(<v,c,n>) by:

P(<v,c,n>) = Σ_{z∈Z} P(<v,c>|z) P(n|z) P(z),

where Z denotes a set of latent classes of co-occurrence, and the probabilistic parameters P(<v,c>|z), P(n|z), and P(z) can be estimated by the EM algorithm.

Given P(<v,c,n>), we can use various co-occurrence measures to estimate the likelihood of a given pair of <v,c> and n. Well-known options are P(<v,c,n>) (Prob), mutual information (MI), and the Dice coefficient (Dice).

[5] P(<v,c,n>) can be represented by the product of P(<v,c>) and P(n|<v,c>). Both of the marginal distributions correspond to existing linguistic concepts: the former indicates the likelihood of a case structure, while the latter indicates the degree to which the semantic constraint is satisfied.
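Given trained PLSI parameters, the three measures can be computed directly from the mixture. The sketch below is our own illustration (not the authors' code): the array names `p_z`, `p_vc_z`, and `p_n_z` are ours, and we instantiate MI as pointwise mutual information and Dice in its probability form, which are the usual instantiations for co-occurrence scoring.

```python
import numpy as np

# A sketch of scoring a triplet <v, c, n> with PLSI parameters, assuming
# EM training has produced: p_z[z] = P(z), p_vc_z[z, i] = P(<v,c>_i | z),
# and p_n_z[z, j] = P(n_j | z). Indices i and j identify the <v,c> pair
# and the noun in the vocabulary.

def joint_prob(p_z, p_vc_z, p_n_z, i, j):
    """P(<v,c>_i, n_j) = sum_z P(<v,c>_i | z) P(n_j | z) P(z)."""
    return float(np.sum(p_z * p_vc_z[:, i] * p_n_z[:, j]))

def mutual_information(p_z, p_vc_z, p_n_z, i, j):
    """Pointwise MI: log P(<v,c>, n) / (P(<v,c>) P(n))."""
    joint = joint_prob(p_z, p_vc_z, p_n_z, i, j)
    p_vc = float(np.sum(p_z * p_vc_z[:, i]))  # marginal P(<v,c>)
    p_n = float(np.sum(p_z * p_n_z[:, j]))    # marginal P(n)
    return float(np.log(joint / (p_vc * p_n)))

def dice(p_z, p_vc_z, p_n_z, i, j):
    """Dice coefficient: 2 P(<v,c>, n) / (P(<v,c>) + P(n))."""
    joint = joint_prob(p_z, p_vc_z, p_n_z, i, j)
    p_vc = float(np.sum(p_z * p_vc_z[:, i]))
    p_n = float(np.sum(p_z * p_n_z[:, j]))
    return 2.0 * joint / (p_vc + p_n)
```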

3.2.2 Negative model

Pos might not be able to properly judge the correctness of <v,c,n> by setting a simple threshold, particularly in cases where P(<v,c>) or P(n) is low. This defect is expected to be compensated for by the use of negative examples. However, we cannot incorporate negative examples into the statistical language model directly. We thus construct a negative model Neg separately from Pos.

One simple way of using negative examples is the k-nearest neighbor (k-NN) averaging method. Assuming that the distance between an input triplet <v,c,n> and a labeled negative example <v',c',n'> depends on both the distance between <v,c> and <v',c'> and the distance between n and n', we formulate the following distance function:

Dist(<v,c,n>, <v',c',n'>) = DS(P(Z|n), P(Z|n')) + DS(P(Z|<v,c>), P(Z|<v',c'>)).

Here, P(Z|<v,c>) and P(Z|n) are the feature vectors for <v,c> and n. These probability distributions are obtained through the EM algorithm for Pos, and the function DS denotes the distributional similarity between two probability distributions. One popular measure of distributional similarity is the Jensen-Shannon divergence (DS_JS), which is examined in (Lapata et al., 2001; Lee, 2001). Given a pair of probability distributions q and r, DS_JS is given by:

DS_JS(q, r) = (1/2) [ D(q || (q+r)/2) + D(r || (q+r)/2) ],

where the function D is the Kullback-Leibler divergence. DS_JS is always non-negative, and DS_JS = 0 iff q = r.

Given an input <v,c,n>, Neg outputs the weighted average distance Score_Neg between the input and its k nearest neighbors as the score indicating the degree of correctness. Formally,

Score_Neg = (1/k) Σ_{i=1}^{k} λ_i Dist(<v,c,n>, <v',c',n'>_i),

where λ_i is the weight for <v',c',n'>_i, the i-th nearest neighbor.
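The distance function and Score_Neg can be sketched as follows. This is our own illustrative code, not the authors' implementation; feature vectors P(Z|n) and P(Z|<v,c>) are assumed to be 1-D probability arrays, and we use the reciprocal-rank weights λ_i = 1/i mentioned later in Section 4.

```python
import numpy as np

# A sketch of the Jensen-Shannon divergence, the triplet distance, and
# the k-NN score Score_Neg described above.

def kl(q, r):
    """Kullback-Leibler divergence D(q || r); 0*log(0) treated as 0."""
    mask = q > 0
    return float(np.sum(q[mask] * np.log(q[mask] / r[mask])))

def js(q, r):
    """Jensen-Shannon divergence: non-negative, zero iff q == r."""
    m = (q + r) / 2.0
    return 0.5 * kl(q, m) + 0.5 * kl(r, m)

def dist(vc_vec, n_vec, vc_vec2, n_vec2):
    """Dist(<v,c,n>, <v',c',n'>): JS over noun vectors plus JS over
    <v,c> vectors."""
    return js(n_vec, n_vec2) + js(vc_vec, vc_vec2)

def score_neg(input_vecs, negatives, k=1):
    """Weighted average distance to the k nearest negative examples,
    with lambda_i = 1/i (reciprocal rank)."""
    vc_vec, n_vec = input_vecs
    dists = sorted(dist(vc_vec, n_vec, vc2, n2) for vc2, n2 in negatives)
    top = dists[:k]
    return sum(d / (i + 1) for i, d in enumerate(top)) / len(top)
```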

3.2.3 Combined model

Given the pair of scores output by Pos and Neg, our error detection model Com converts them into normalized confidence values C_Pos and C_Neg (0 ≤ C_Pos, C_Neg ≤ 1). Each normalization function can be derived using development data (see Section 4). Com then outputs the weighted average of C_Pos and C_Neg as the overall score:

Score_Com = β C_Pos + (1 − β) C_Neg,

where 0 ≤ β ≤ 1 determines the weights of the models. Score_Com indicates the degree of correctness.
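The combination step is a plain weighted average once the two raw scores are normalized. In the sketch below (our own code), `min_max_normalizer` is only a stand-in: the paper derives each normalization function from development data, and its exact form is not specified here.

```python
# A minimal sketch of Score_Com = beta * C_Pos + (1 - beta) * C_Neg,
# using a hypothetical min-max normalizer in place of the paper's
# development-data-derived normalization functions.

def min_max_normalizer(lo, hi):
    """Return a function scaling a raw score into [0, 1], clamped."""
    def norm(score):
        return min(1.0, max(0.0, (score - lo) / (hi - lo)))
    return norm

def score_com(pos_score, neg_score, norm_pos, norm_neg, beta=0.5):
    return beta * norm_pos(pos_score) + (1.0 - beta) * norm_neg(neg_score)
```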

3.3 Selective sampling of negative data

We need negative examples that are expected to be useful in improving Neg and Com. For the current purpose, an example is not useful if it is positive. An example is not useful, either, if it is similar to any of the known negative examples. In other words, we prefer negative examples that are not similar to any existing labeled negative example. We henceforth refer to unlabeled instances as samples, and labeled ones as examples.

Our strategy for selecting samples can be implemented straightforwardly. We use Pos to estimate how likely it is that a sample is negative. To compute the similarity between an unlabeled sample and the labeled examples, we use Neg. Let p_x be the estimated probability of an unlabeled sample x, and s_x (> 0) be the similarity between x and its nearest negative example. The preference for a given sample x is given by, e.g., Pref(x) = −s_x log(p_x), which we use in the experiments below.

Our selective sampling scheme is as follows:

1. Generate a set of paraphrases by applying paraphrasing rules to sentences sampled from documents in a given target domain.

2. Extract a set of triplets from the set of paraphrases. We call it a sample pool.

3. Sample a small number of triplets randomly from the sample pool, and label them manually. Use only the negative samples as the seed of the negative example set for Neg.

4. For each sample in the sample pool, calculate its preference by Pref given above.

5. Select the most preferred sample, and label it manually. If it is negative, add it into the negative example set.

6. Repeat Steps 4 and 5 until a certain stopping condition is satisfied (for example, until the performance on development data has converged).
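The loop over Steps 4-6 can be sketched as follows. This is our own illustration with hypothetical helper names: `pos_prob` supplies p_x, `nearest_neg_sim` supplies s_x, and `oracle_label` stands in for the human annotator. For brevity, s_x is treated as fixed here, whereas the full scheme would recompute it as new negative examples are added.

```python
import math

# A sketch of the selective sampling loop with the preference
# Pref(x) = -s_x * log(p_x) described above.

def preference(p_x, s_x):
    return -s_x * math.log(p_x)

def select_next(pool, pos_prob, nearest_neg_sim):
    """Step 4-5: return the sample with the highest preference."""
    return max(pool, key=lambda x: preference(pos_prob(x), nearest_neg_sim(x)))

def selective_sampling(pool, pos_prob, nearest_neg_sim, oracle_label, steps):
    """Repeatedly pick the most preferred sample, ask the oracle for its
    label, and keep it only if it is negative."""
    negatives = []
    pool = list(pool)
    for _ in range(min(steps, len(pool))):
        x = select_next(pool, pos_prob, nearest_neg_sim)
        pool.remove(x)
        if oracle_label(x) == "negative":
            negatives.append(x)
    return negatives
```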

4 Experiments

4.1 Data

We trained Pos and Neg in the following way (see also Figure 1). During this process, paraphrase candidates were constructed for evaluation as well.

1. 53 million tokens (8.0 million types) of triplets <v,c,n> were collected from the parsed [6] sentences of newspaper articles [7].

2. Triplets occurring only once were filtered out. To handle case alternation properly, we dealt with active and passive forms of verbs separately. We restricted c to the seven most frequent case particles: "ga (NOM)", "o (ACC)", "ni (DAT)", "de (LOC)", "e (to)", "kara (from)", and "yori (from/than)". This procedure resulted in 3.1 million types of triplets consisting of 38,512 types of n and 66,484 types of <v,c>.

3. We estimated the probabilistic parameters of PLSI by applying the EM algorithm [8] to the data, changing the number of latent classes |Z| from 2 through 1,500.

4. To develop a negative example set, we excerpted 90,000 sentences from the newspaper articles used in Step 1, input them into a paraphrasing system for Japanese [9], and obtained 7,167 paraphrase candidates by applying the same paraphrasing rules that were used for our previous investigation into transfer errors (Fujita and Inui, 2003).

5. We filtered out the generated candidates that contain no changed case structure and those that include either a v or an n with a frequency of less than 2,000 in the collection given in Step 1. As a result, 3,166 candidates remained.

[6] We used the statistical Japanese dependency parser CaboCha (Kudo and Matsumoto, 2002) for parsing. http://cl.aist-nara.ac.jp/~taku-ku/software/cabocha/
[7] Extracts from 9 years of the Mainichi Shinbun and 10 years of the Nihon Keizai Shinbun, consisting of 25,061,504 sentences, were used.
[8] http://cl.aist-nara.ac.jp/~taku-ku/software/plsi/
[9] We used KURA (Takahashi et al., 2001). http://cl.aist-nara.ac.jp/lab/kura/doc/

Figure 2: R-P curves of baseline models.

6. Finally, we manually labeled the 3,166 candidates and their triplets. We obtained (i) 2,358 positive and 808 (25.5%) negative candidates [10], and (ii) 3,704 types of triplets consisting of 2,853 positive and 851 negative. The former set was used for evaluation, while the latter was used for training Neg.

4.2 Evaluation measures

For evaluation, we compared the performance of Pos, Neg, and Com. For each model, we set a threshold and used it so that a given input was classified as erroneous if and only if it received a lower score than the threshold. Given such a threshold, recall R and precision P of a model are defined as follows:

R = (# of correctly detected erroneous candidates) / (# of erroneous candidates),

P = (# of correctly detected erroneous candidates) / (# of candidates the model classified as erroneous).
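The two measures, and the 11-point summary used below, can be sketched as follows. This is our own code, not the paper's evaluation script; for the 11-point average we use interpolated precision (the maximum precision at any recall at or above each level), a common convention that the paper does not spell out.

```python
# A sketch of recall, precision, and 11-point average precision for this
# setting: a candidate is flagged as erroneous iff its score falls below
# the threshold.

def recall_precision(scores, labels, threshold):
    """labels: True means the candidate is erroneous.
    Returns (recall, precision)."""
    flagged = [lab for s, lab in zip(scores, labels) if s < threshold]
    detected = sum(flagged)                 # correctly detected errors
    n_err = sum(labels)
    recall = detected / n_err if n_err else 0.0
    precision = detected / len(flagged) if flagged else 0.0
    return recall, precision

def eleven_point_precision(points):
    """points: (recall, precision) pairs from varying the threshold.
    Averages interpolated precision at R = 0.0, 0.1, ..., 1.0."""
    total = 0.0
    for i in range(11):
        level = i / 10.0
        ps = [p for r, p in points if r >= level]
        total += max(ps) if ps else 0.0
    return total / 11.0
```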

While we could estimate the optimal threshold for each model, in the experiments we plotted recall-precision (R-P) curves by varying the threshold. To summarize an R-P curve, we used 11-point average precision (hereafter, 11-point precision), where the eleven points are R = 0.0, 0.1, ..., 1.0. To compare R-P curves, we conducted the Wilcoxon rank-sum test using precision at the eleven points above, assuming p < 0.05 as the significance level.

4.3 Results

4.3.1 Baseline

First, to illustrate the complexity of the task, we show the performance of the baseline models: a dictionary-based model, a word-based naive smoothing model, and our statistical language model Pos. We regard Pos as a baseline because our concern is to what extent Pos can be enhanced by introducing Neg and Com. For the case frame dictionary, we used the largest Japanese case frame dictionary, the NTT Japanese Lexicon (Ikehara et al., 1997) (Dic), and the Good-Turing estimation (GT) for the naive smoothing model.

Figure 3: 11-point precision of models over |Z|.

As shown in Figure 2, Pos significantly outperforms both Dic and GT. Prob, MI, and Dice with |Z| = 1,000 achieve 65.6%, 69.2%, and 67.5% 11-point precision, while Dic achieves 41.9% precision under 61.6% recall [11], and MI and Dice based on GT achieve 51.9% and 58.0% 11-point precision [12]. Regarding Pos, Prob outperforms MI and Dice for lower recall, while MI and Dice outperform Prob for higher recall, but there is no significant difference among them.

The classification performance of Pos is shown over the number of latent classes |Z| in Figure 3. Larger |Z| achieves higher 11-point precision. However, overly enlarging |Z| will presumably not work well, since the performance of Pos hits a ceiling. Since the optimal |Z| depends on the lexicon, we need to estimate it for a given lexicon using development data. However, since the performance distribution over |Z| is so moderate, we can optimize |Z| at a reasonable cost.

[10] 41 out of the 808 were incorrect due to semantic inconsistency between sibling cases.
[11] Dic classifies a given <v,c,n> as correct or not if and only if both v and n are described in the dictionary. In our experiment, since 338 paraphrase candidates (10.7%) were not judged, we calculated recall and precision using the 2,828 judged candidates.
[12] Notice that Prob based on GT does not perform for lower recall (R ≤ 0.66, in our experiment) because it does not distinguish triplets that have the same frequency.

Figure 4: Learning curves of Com. Curves: 11-point precision; bars: # of obtained negative examples.

4.3.2 Properties of negative model

Neg was evaluated by conducting 5-fold cross-validation over the labeled negative examples to keep training and test data exclusive. The weighting function λ_i for the i-th nearest neighbor was set to 1/i, the reciprocal of the similarity rank. The 11-point precision for combinations of parameters is shown in Figure 3. In contrast to Pos, Neg performs best with small |Z|. This is good news, because a larger |Z| incurs a higher computation cost for calculating each distance. With regard to the number of consulted neighbors k, the 11-point precision peaks at k = 1. We speculate that the combinatorial space is so large that a larger k causes more noise. Hence, we can conclude that k = 1 is enough for this task.

The performance of Neg may seem too high given the number of negative examples we used. However, it is not necessarily unlikely. Recall that the set of paraphrasing rules we used was built for the purpose of text simplification. In such a case, presumably the variety of triplets involved in the generated paraphrases is relatively small. Therefore, a limited number of negative examples suffices to cover the negative classes. This is expected to be a common property of applied paraphrasing systems, as mentioned in Section 3.1.

4.3.3 Combining models with selectively sampled examples

To evaluate the effectiveness of (a) combining Pos with Neg and (b) selective sampling, we conducted simulations using the 3,704 types of labeled triplets.

We first randomly sampled two sets of 100 samples from the 3,704 labeled triplets. One involved 16 negative examples and the other 22. We used these two different sets of negative examples as seeds of the negative example set. We then conducted the selective sampling scheme for each seed, regarding the remaining 3,604 triplets as the sample pool. The parameters and metrics employed were Prob and |Z| = 1,000 for Pos, and |Z| = 20 and k = 1 for Neg.

Figure 5: R-P curves of our models.

In each stage of selective sampling (learning), we formed a combined model Com, employing the parameters and metrics on which each component model performed best, i.e., MI and |Z| = 1,000 for Pos, and |Z| = 20 and k = 1 for Neg. The combining ratio β was set to 0.5 just for averaging. We then evaluated Com by conducting 5-fold cross-validations, as for Neg.

Figure 4 compares the performance of selective and random sampling, showing the averaged results for the two seeds. In the figure, the horizontal axis denotes the number of sampled examples. The bars in the figure denote the number of obtained negative examples, showing that the preference function efficiently selects negative examples compared to random sampling. The curves in the figure denote the performance curves, which show a remarkable advantage of selective sampling, particularly in the early stage of learning.

Figure 5 shows the R-P curves of Pos, Neg, and Com. Com surpasses Pos and Neg over all ranges of recall. One can see that the models based on selective sampling exhibit R-P curves as nice as those of the model with the largest negative example set. It is therefore confirmed that even if the collection of negative examples is not sufficient to represent the distribution of the negative classes, we can enhance the baseline model Pos by combining it with Neg. With the largest negative example set, Com achieved 81.3% 11-point precision, a 12.1 point improvement over Pos. Concerning the optimal β, which depends on the set of negative examples, it can be easily estimated using development data. For the present settings, the performance peaks when a slightly greater weight is given to Neg, i.e., β = 0.45. However, there is no significant difference in performance between β = 0.45 and 0.5. Hence, we can regard 0.5 as the default value for β.

5 Conclusions

We addressed the task of detecting incorrect case assignment, a major error source in paraphrasing of Japanese sentences. Our proposals are: (i) an empirical method to detect incorrect case assignments, in which we enhanced a statistical language model by combining it with another model trained only on a small collection of negative examples, and (ii) a selective sampling scheme for effective collection of negative examples. Our methods were justified through empirical experiments.

Since our aim is to generate correct paraphrases, we should correct the detected errors. In (Fujita and Inui, 2003), we observed that a small part of the incorrect case assignments (22/162) could be corrected by replacing case markers with other ones, while the remaining larger part could not. Furthermore, even if we could correct all incorrect case assignments, other types of errors would still remain in the paraphrased sentences. We thus think that coping with various types of errors is more important. The errors discussed in this paper appear at relatively shallow levels of syntax and semantics. Our next challenge will be to go to a deeper level, such as the verification of whether meaning is preserved or not.

References

ACL. 2003. The 2nd International Workshop on Paraphrasing: Paraphrase Acquisition and Applications (IWP).

J. Carroll, G. Minnen, D. Pearce, Y. Canning, S. Devlin, and J. Tait. 1999. Simplifying text for language-impaired readers. In Proceedings of the 9th Conference of the European Chapter of the Association for Computational Linguistics (EACL), pages 269–270.

A. Fujita and K. Inui. 2003. Exploring transfer errors in lexical and structural paraphrasing. Journal of Information Processing Society of Japan, 44(11):2826–2838. (in Japanese).

T. Hofmann. 1999. Probabilistic latent semantic indexing. In Proceedings of the 22nd Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR), pages 50–57.

S. Ikehara, M. Miyazaki, S. Shirai, A. Yokoo, H. Nakaiwa, K. Ogura, Y. Ooyama, and Y. Hayashi, editors. 1997. Nihongo Goi Taikei – A Japanese Lexicon. Iwanami Shoten. (in Japanese).

K. Inui, A. Fujita, T. Takahashi, R. Iida, and T. Iwakura. 2003. Text simplification for reading assistance: a project note. In Proceedings of the 2nd International Workshop on Paraphrasing: Paraphrase Acquisition and Applications (IWP), pages 9–16.

N. Kaji, D. Kawahara, S. Kurohashi, and S. Sato. 2002. Verb paraphrase based on case frame alignment. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics (ACL), pages 215–222.

F. Keller, M. Lapata, and O. Ourioupina. 2002. Using the Web to overcome data sparseness. In Proceedings of the 2002 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 230–237.

K. Knight and I. Chander. 1994. Automated postediting of documents. In Proceedings of the 12th National Conference on Artificial Intelligence (AAAI), pages 779–784.

K. Kondo, S. Sato, and M. Okumura. 2001. Paraphrasing by case alternation. Journal of Information Processing Society of Japan, 42(3):465–477. (in Japanese).

T. Kudo and Y. Matsumoto. 2002. Japanese dependency analysis using cascaded chunking. In Proceedings of the 6th Conference on Natural Language Learning (CoNLL), pages 63–69.

M. Lapata, F. Keller, and S. McDonald. 2001. Evaluating smoothing algorithms against plausibility judgements. In Proceedings of the 39th Annual Meeting of the Association for Computational Linguistics (ACL), pages 346–353.

L. Lee. 2001. On the effectiveness of the skew divergence for statistical language analysis. In Proceedings of the 8th International Workshop on Artificial Intelligence and Statistics, pages 65–72.

NLPRS. 2001. Workshop on Automatic Paraphrasing: Theories and Applications.

F. Pereira, N. Tishby, and L. Lee. 1993. Distributional clustering of English words. In Proceedings of the 31st Annual Meeting of the Association for Computational Linguistics (ACL), pages 183–190.

D. Ravichandran and E. Hovy. 2002. Learning surface text patterns for a question answering system. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics (ACL), pages 215–222.

S. Shirai, S. Ikehara, and T. Kawaoka. 1993. Effects of automatic rewriting of source language within a Japanese to English MT system. In Proceedings of the 5th International Conference on Theoretical and Methodological Issues in Machine Translation (TMI), pages 226–239.

T. Takahashi, T. Iwakura, R. Iida, A. Fujita, and K. Inui. 2001. KURA: a transfer-based lexico-structural paraphrasing engine. In Proceedings of the 6th Natural Language Processing Pacific Rim Symposium (NLPRS) Workshop on Automatic Paraphrasing: Theories and Applications, pages 37–46.

K. Torisawa. 2002. An unsupervised learning method for associative relationships between verb phrases. In Proceedings of the 19th International Conference on Computational Linguistics (COLING), pages 1009–1015.


case的两种表现方式和使用

CASE语句 在某些方面,CASE语句是几种不同语句的一种等价物,这些语句来自你之前学过的语言。在过程化的编程语言中,下面的语句与CASE的功能相似: Switch: C、C++、C#、Delphi Select Case:Visual Basic Do Case:Xbase Evaluate:COBOL 我可以肯定还有其他语句,它们来自我多年前以这种或那种形式使用的语言。在许多方面,在T-SQL中使用CASE语句的最大缺陷是置换运算符而不是流控制语句。 书写CASE语句的方法不止一种:可以使用输入表达式或布尔表达式。第一种选择是可以使用一个输入表达式,将它与每一个WHEN子句中使用的值进行比较。SQL Server将其视为简单CASE: 1.CASE 2.WHEN THEN 3.[...n] 4.[ELSE ] 5.END 第二种选择是为每个WHEN子句提供一个表达式(计算结果为TRUE/FALSE)。文档将其视为搜索CASE: 1.CASE 2.WHEN THEN 3.[...n] 4.[ELSE ] 5.END 或许CASE最大的好处是可以在SELECT语句里"内联"地(即,作为完整的部分)使用它。这一功能绝对是非常强大的。 1. 简单CASE 简单CASE使用一个可以得到布尔值结果的表达式。下面看一个例子: https://www.sodocs.net/doc/297544061.html,E AdventureWorks2008; 2.GO 3. 4.SELECT TOP 10 SalesOrderID, SalesOrderID % 10 AS 'Last Digit', Position = 5.CASE SalesOrderID % 10 6.WHEN 1 THEN 'First' 7.WHEN 2 THEN 'Second' 8.WHEN 3 THEN 'Third'

switch语句的用法

C语言switch语句的用法详解 C语言还提供了另一种用于多分支选择的switch语句,其一般形式为: switch(表达式){ case 常量表达式1: 语句1; case 常量表达式2: 语句2; … case 常量表达式n: 语句n; default: 语句n+1; } 其语义是:计算表达式的值。并逐个与其后的常量表达式值相比较,当表达式的值与某个常量表达式的值相等时,即执行其后的语句,然后不再进行判断,继续执行后面所有case 后的语句。如表达式的值与所有case后的常量表达式均不相同时,则执行default后的语句。 【例4-9】 1.#include 2.int main(void){ 3.int a; 4.printf("input integer number: "); 5.scanf("%d",&a); 6.switch(a){ 7.case1:printf("Monday\n"); 8.case2:printf("Tuesday\n"); 9.case3:printf("Wednesday\n"); 10.case4:printf("Thursday\n"); 11.case5:printf("Friday\n");

12.case6:printf("Saturday\n"); 13.case7:printf("Sunday\n"); 14.default:printf("error\n"); 15.} 16.return0; 17.} 本程序是要求输入一个数字,输出一个英文单词。但是当输入3之后,却执行了case3以及以后的所有语句,输出了Wednesday 及以后的所有单词。这当然是不希望的。为什么会出现这种情况呢?这恰恰反应了switch语句的一个特点。在switch语句中,“case 常量表达式”只相当于一个语句标号,表达式的值和某标号相等则转向该标号执行,但不能在执行完该标号的语句后自动跳出整个switch 语句,所以出现了继续执行所有后面case 语句的情况。这是与前面介绍的if语句完全不同的,应特别注意。 为了避免上述情况,C语言还提供了一种break语句,专用于跳出switch语句,break 语句只有关键字break,没有参数。在后面还将详细介绍。修改例题的程序,在每一case语句之后增加break 语句,使每一次执行之后均可跳出switch语句,从而避免输出不应有的结果。 【例4-10】 1.#include 2.int main(void){ 3.int a; 4.printf("input integer number: "); 5.scanf("%d",&a); 6.switch(a){ 7.case1:printf("Monday\n");break;

case语句 四选一

module mux4_to_1case (out,i0,i1,i2,i3,s1,s0); output out; input i0,i1,i2,i3,s1,s0; reg out; always@(i0 or i1 or i2 or i3 or s1 or s0) case({s1,s0}) 2'd0:out =i0; 2'd1:out =i1; 2'd2:out =i2; 2'd3:out =i3; endcase endmodule module case4_1; reg IN0,IN1,IN2,IN3; reg S1,S0; wire OUTPUT; mux4_to_1case m1(OUTPUT,IN0,IN1,IN2,IN3,S1,S0); initial begin IN0 =1;IN1 = 0;IN2 = 1;IN3 = 0; #0 $display("IN0=%b,IN1=%b,IN2=%b,IN3=%b\n",IN0,IN1,IN2,IN3); S1=0; S0=0; #1 $display("S1=%b,S0=%b,OUTPUT=%b\n",S1,S0,OUTPUT); S1=0; S0=1; #1 $display("S1=%b,S0=%b,OUTPUT=%b\n",S1,S0,OUTPUT); S1=1; S0=0; #1 $display("S1=%b,S0=%b,OUTPUT=%b\n",S1,S0,OUTPUT); S1=1; S0=1; #1 $display("S1=%b,S0=%b,OUTPUT=%b\n",S1,S0,OUTPUT); end endmodule

do case 语句

6、表RSDA.DBF结构为:姓名(C,6);性别(C,2),年龄(N,2),出生日期(D,8)。判断表中是否有"李明",查询此人的性别及年龄,确定参加运动会的项目。 SET TALK OFF USE RSDA ***********SPACE********** 【?】 FOR 姓名= "李明" ***********SPACE********** IF .NOT. 【?】 DO CASE CASE 性别= "男" ?"请参加爬山比赛" CASE 年龄<=50 ? "请参加投篮比赛" CASE 年龄<=60 ? "请参加老年迪斯科比赛" ***********SPACE********** 【?】 ELSE ? "查无此人" BROWSE ENDIF USE SET TALK ON RETURN 『答案』: 1 LOCATE 或 LOCA 或 LOCATE★ALL 2 EOF() 3 ENDCASE 或 ENDC 13、对表XSDB.DBF中的计算机和英语都大于等于90分以上的学生奖学金进行调整:法律系学生奖学金增加12元、英语系学生奖学金增加15元、中文系学生奖学金增加18元,其他系学生奖学金增加20元。请在【】处添上适当的内容,使程序完整。 USE XSDB ***********SPACE********** 【?】 DO WHILE FOUN() DO CASE CASE 系别="法律" ZJ=12 CASE 系别="英语" ZJ=15 CASE 系别="中文" ZJ=18 ***********SPACE********** 【?】 ZJ=20 ENDCASE

switch语句的用法

if语句处理两个分支,处理多个分支时需使用if-else-if结构,但如果分支较多,则嵌套的if语句层就越多,程序不但庞大而且理解也比较困难.因此,C语言又提供了一个专门用于处理多分支结构的条件选择语句,称为switch语句,又称开关语句.使用switch语句直接处理多个分支(当然包括两个分支).其一般形式为: 引用 switch(表达式) { case 常量表达式1: 语句1; break; case 常量表达式2: 语句2; break; …… case 常量表达式n: 语句n; break; default: 语句n+1; break; } switch语句的执行流程是:首先计算switch后面圆括号中表达式的值,然后用此值依次与各个case的常量表达式比较,若圆括号中表达式的值与某个case后面的常量表达式的值相等,就执行此case后面的语句,执行后遇break语句就退出switch语句;若圆括号中表达式的值与所有case后面的常量表达式都不等,则执行default后面的语句n+1,然后退出switch语句,程序流程转向开关语句的下一个语句.如下程序,可以根据输入的考试成绩的等级,输出百分制分数段: 引用 switch(grade) { case 'A': /*注意,这里是冒号:并不是分号;*/ printf("85-100\n");

break; /*每一个case语句后都要跟一个break用来退出switch 语句*/ case 'B': /*每一个case后的常量表达式必须是不同的值以保证 分支的唯一性*/ printf("70-84\n"); break; case 'C': printf("60-69\n"); break; case 'D': printf("<60\n"); break; default: printf("error!\n"); } (2) 如果在case后面包含多条执行语句时,也不需要像if语句那样加大括号,进入某个case后,会自动顺序执行本case后面的所有执行语句.如: 引用 { case 'A': if(grade<=100) printf("85-100\n"); else printf("error\n"); break; …… (3) default总是放在最后,这时default后不需要break语句.并且,default 部分也不是必须的,如果没有这一部分,当switch后面圆括号中表达式的值与所有case后面的常量表达式的值都不相等时,则不执行任何一个分支直接退出switch语句.此时,switch语句相当于一个空语句.例如,将上面例子中switch 语句中的default部分去掉,则当输入的字符不是"A","B","C"或"D"时,此switch语句中的任何一条语句也不被执行. (4) 在switch-case语句中,多个case可以共用一条执行语句,如:

SQL中Case语句用法

SQL中Case语句用法讨论 Case具有两种格式。简单Case函数和Case搜索函数。 --简单Case函数 CASE sex WHEN'1'THEN'男' WHEN'2'THEN'女' ELSE'其他'END --Case搜索函数 CASE WHEN sex = '1'THEN'男' WHEN sex = '2'THEN'女' ELSE'其他'END 这两种方式,可以实现相同的功能。简单Case函数的写法相对比较简洁,但是和Case搜索函数相比,功能方面会有些限制,比如写判断式。 还有一个需要注意的问题,Case函数只返回第一个符合条件的值,剩下的Case 部分将会被自动忽略。 --比如说,下面这段SQL,你永远无法得到“第二类”这个结果 CASE WHEN col_1 IN ( 'a', 'b') THEN'第一类' WHEN col_1 IN ('a') THEN'第二类' ELSE'其他'END 下面我们来看一下,使用Case函数都能做些什么事情。 一,已知数据按照另外一种方式进行分组,分析。 有如下数据:(为了看得更清楚,我并没有使用国家代码,而是直接用国家名作为Primary Key) 国家(country) 人口(population) 中国600 美国100 加拿大100 英国200 法国300 日本250 德国200 墨西哥50 印度250 根据这个国家人口数据,统计亚洲和北美洲的人口数量。应该得到下面这个结果。 洲人口

亚洲1100 北美洲250 其他700 想要解决这个问题,你会怎么做?生成一个带有洲Code的View,是一个解决方法,但是这样很难动态的改变统计的方式。 如果使用Case函数,SQL代码如下: SELECT SUM(population), CASE country WHEN'中国'THEN'亚洲' WHEN'印度'THEN'亚洲' WHEN'日本'THEN'亚洲' WHEN'美国'THEN'北美洲' WHEN'加拿大'THEN'北美洲' WHEN'墨西哥'THEN'北美洲' ELSE'其他'END FROM Table_A GROUP BY CASE country WHEN'中国'THEN'亚洲' WHEN'印度'THEN'亚洲' WHEN'日本'THEN'亚洲' WHEN'美国'THEN'北美洲' WHEN'加拿大'THEN'北美洲' WHEN'墨西哥'THEN'北美洲' ELSE'其他'END; 同样的,我们也可以用这个方法来判断工资的等级,并统计每一等级的人数。SQL 代码如下; SELECT CASE WHEN salary <= 500 THEN'1' WHEN salary > 500 AND salary <= 600 THEN'2' WHEN salary > 600 AND salary <= 800 THEN'3' WHEN salary > 800 AND salary <= 1000 THEN'4' ELSE NULL END salary_class, COUNT(*) FROM Table_A GROUP BY CASE WHEN salary <= 500 THEN'1' WHEN salary > 500 AND salary <= 600 THEN'2' WHEN salary > 600 AND salary <= 800 THEN'3' WHEN salary > 800 AND salary <= 1000 THEN'4'

IF和CASE语句的区别

区别:IF语句和CASE语句相比,case语句的可读性较好,它把条件中所有可能出现的情况全部列出来了,可执行条件一目了然。而且CASE语句的执行过程不像IF语句那样又一个逐项条件顺序比较的过程。CASE语句中条件句的次序是不重要的,它的执行过程更接近于并行方式。一般情况下,对相同的逻辑功能综合后,用CASE语句描述的电路比用IF语法描述的电路好用更多的硬件资源。不但如此,对于某些逻辑功能,用CASE语句将无语描述,只能用IF语句来描述。因为IF-THEN-ELSIF语句具有条件相与的功能和自动将逻辑值“-”包括进去的功能(逻辑值“-”有利于逻辑的化简);而CASE语句只有条件相或的功能。 IF语句中至少应有一个条件句,条件句必须有BOOLEAN表达式构成。 IF条件句THEN ——第一种IF语句,用于门阀控制(判断IF后条件句是否为真,为真则执行顺序语句,直到“END IF”完成全部IF语句执行。为伪则跳过顺序语句,直接结束IF语句的执行。) 顺序语句; END IF; IF条件句THEN ——第二种IF语句,用于二选一控制(当所测条件为FALSE,并不直接结束条件语句的执行,而是转向ELSE以下的另一段顺序语句继续执行。具有条件分支的功能,通过测定所设条件的真伪已决定执行哪一组顺序语句,在执行玩其中一组语句后,再结束IF语句。) 顺序语句; ELSE 顺序语句; END IF; IF 条件句THEN ——第三种IF语句,用于多选择控制(通过关键词ELSIF设定多个判定条件,从而是顺序语句的执行分支可以超过两个。) 顺序语句; ELSE 条件句THEN 顺序语句; … ELSE 顺序语句; END IF; IF语句中至少应有一个条件句,条件句必须有BOOLEAN表达式构成。 CASE语句以一个多值表达式为条件式,根据条件式的不同取值选择多项顺序语句中的一项执行,实现多路分支,故适用于两路或多路分支判断结构。

Oracle CASE条件语句

Oracle CASE条件语句 从Oracle9i后,在PL/SQL中也可以像其他的编程语言一样使用CASE语句,CASE语句的执行方式与IF语句相似。通常情况下,CASE语句从关键字CASE开始,后面跟着一个选择器,它通常是一个变量。接下来是WHEN子句,它将根据选择器的值执行不同的PL/SQL语句。 CASE语句共有两种形式。第一种形式是获取一个选择器值,然后将其与每个WHEN 子句进行比较。其语法形式如下: case when then pl/sql_statement1; when then pl/sql_statement2; …… when < expressionN> then pl/sql_statement n; [ else pl/sql_statement n+1;] end; 另一种形式是不使用选择器,而是判断每个WHEN子句中的条件。这种CASE语句的语法结构如下: case when expression 1 then pl/sql_statement1; when expression 2 then pl/sql_statement2; …… when expression N then pl/sql_statement n; [ else pl/sql_statement n+1;] end; 虽然CASE语句的作用与IF..ELSIF..ELSE..END IF语句相同,都可以实现多项选择,但是CASE语句可以以一种更简洁的表示法实现该功能。当执行CASE语句时,系统将根据选择器的值查找与此相匹配的WHEN常量,当找到一个匹配的WHEN常量时,就会执行与该WHEN常量相关的子句。如果没有与选择器相匹配的WHEN常量,那么就执行ELSE子句。 例如,下面的示例演示了CASE语句的使用: SQL> set serveroutput on SQL> declare 2 i number:=0; 3 begin 4 while i< 5 loop 5 case i 6 when 0 then 7 dbms_output.put_line('i is zero'); 8 when 1 then 9 dbms_output.put_line('i is one'); 10 when 2 then 11 dbms_output.put_line('i is two');

Do case 语句

Do case 语句 语法格式 Do case Case 条件1 <语句序列1> Case 条件2 <语句序列2> …… Case 条件n <语句序列n> [otherwise <语句序列n+1>] Endcase 后续语句 注意:1. 2. 3. 循环结构 语法格式 Do while <条件> <语句序列> Enddo 后续语句

1+2+3+…+100 引进s , i s=0 i=1 s=s+i i=i+1 Loop 语句返回 Exit 语句退出 Do while <条件> <语句1> <语句2> exit <语句3> <语句4> Enddo 后续语句 For语句 For <循环变量>=<初值> to <终值> [step <步长>] 循环体 Endfor 后续语句 注意:1. 2.步长为1 为正也可负 3. 循环次数=(终值-初值)/步长+1

水仙花数 100---999 153= 1^3+5^3+3^3 110 针对表循环 全局变量public x 本模块上级模块下级模块私有变量private y 本模块下级模块 局部变量local z 本模块 3.1数据库 一、建立数据库 1.菜单 2.项目管理器 3.命令create database 数据库名称 二、使用数据库 1.菜单 2.项目管理器 3.命令open database 数据库名称 三、修改数据库 Modify database 数据库名称

四、设置当前数据库 1.数据库下拉列表框 2.命令set database to 数据库名称 五、删除数据库 1.项目管理器 2.命令delete database 数据库名称先关闭close database 3.2建立数据库表 一、建立表 1.菜单 2.项目管理器 3.命令create 二、修改表 1.菜单 2.项目管理器 3.命令modify structure 3.3表的基本操作 一、浏览记录browse 二、增加记录 1.append 2.insert

case的用法

CASE语句例子解释: 一、简单case 表达式 case测试表达式 when简单表达式1 then结果表达式1 when简单表达式2 then结果表达式2 when简单表达式3 then结果表达式3 else结果表达式n end 说明:测试表达式可以是一个常数、字段名、函数或子查询,各个简单表达式中不包含比较运算符,它们给出被比较的表达式或值,其数据类型必须与测试表达式的数据类型相同,或者可以自动转换为测试表达式的数据类型。 CASE表达式的执行过程为: ①计算测试表达式,然后按指定顺序对每个WHEN子句的简单表达式 进行计算。 ②如果某个简单表达式与测试表达式相匹配,则返回与第一个取值 为TRUE的WHEN相对应的结果表达式的值。 ③如果所有简单表达式都不与测试表达式相匹配,则当指定ELSE子 句时,将返回ELSE中指定的结果表达式的值,若没有指定ELSE子句,则返回NULL值。 例: ㈠、declare@a int,@answer char(10)

set@answer=10 set@answer=case@a when 1 then'A' when 2 then'B' when 3 then'C' when 4 then'D' when 5 then'E' ELSE'others' end print'is'+@answer 结果:is others ㈡、declare@a int,@answer char(10) set@answer=10 set@a=5 set@answer=case@a when 1 then'A' when 2 then'B' when 3 then'C' when 4 then'D' when 5 then'E' ELSE'others' end print'is'+@answer 则结果为:isE 二、搜索case 表达式 case when布尔表达式1 then结果表达式1 when布尔表达式2 then结果表达式2

case语句实例

sql语句判断方法之一 Case具有两种格式。简单Case函数和Case搜索函数。 --简单Case函数 CASE sex WHEN '1' THEN '男' WHEN '2' THEN '女' ELSE '其他' END --Case搜索函数 CASE WHEN sex = '1' THEN '男' WHEN sex = '2' THEN '女' ELSE '其他' END 这两种方式,可以实现相同的功能。简单Case函数的写法相对比较简洁,但是和Case搜索函数相比,功能方面会有些限制,比如写判断式。 还有一个需要注意的问题,Case函数只返回第一个符合条件的值,剩下的Case 部分将会被自动忽略。 例子: 有一张表,里面有3个字段:语文,数学,英语。其中有3条记录分别表示语文70分,数学80分,英语58分,请用一条sql语句查询出这三条记录并按以下条件显示出来(并写出您的思路): 大于或等于80表示优秀,大于或等于60表示及格,小于60分表示不及格。显示格式: 语文数学英语 及格优秀不及格 ------------------------------------------ select (case when 语文>=80 then '优秀' when 语文>=60 then '及格' else '不及格') as 语文,

(case when 数学>=80 then '优秀' when 数学>=60 then '及格' else '不及格') as 数学, (case when 英语>=80 then '优秀' when 英语>=60 then '及格' else '不及格') as 英语, from table CASE 可能是SQL 中被误用最多的关键字之一。虽然你可能以前用过这个关键字来创建字段,但是它还具有更多用法。例如,你可以在WHERE 子句中使用CASE。 首先让我们看一下CASE 的语法。在一般的SELECT 中,其语法如下:SELECT = CASE WHEN THEN WHEN THEN ELSE END 在上面的代码中需要用具体的参数代替尖括号中的内容。下面是一个简单的例子: USE pubs GO SELECT Title, 'Price Range' = CASE WHEN price IS NULL THEN 'Unpriced' WHEN price < 10 THEN 'Bargain'

switch-case语句用法

switch-case语句用法 2007-12-25 08:11 if语句处理两个分支,处理多个分支时需使用if-else-if结构,但如果分支较多,则嵌套的if 语句层就越多,程序不但庞大而且理解也比较困难.因此,C语言又提供了一个专门用于处理多分支结构的条件选择语句,称为switch语句,又称开关语句.使用switch语句直接处理多个分支(当然包括两个分支).其一般形式为: 引用 -------------------------------------------------------------------------------- switch(表达式) { case 常量表达式1: 语句1; break; case 常量表达式2: 语句2; break; …… case 常量表达式n: 语句n; break; default: 语句n+1; break; } -------------------------------------------------------------------------------- switch语句的执行流程是:首先计算switch后面圆括号中表达式的值,然后用此值依次与各个case的常量表达式比较,若圆括号中表达式的值与某个case后面的常量表达式的值相等,就执行此case后面的语句,执行后遇break语句就退出switch语句;若圆括号中表达式的值与所有case后面的常量表达式都不等,则执行default后面的语句n+1,然后退出switch语句,程序流程转向开关语句的下一个语句.如下程序,可以根据输入的考试成绩的等级,输出百分制分数段: 引用 -------------------------------------------------------------------------------- switch(grade) { case 'A': /*注意,这里是冒号:并不是分号;*/

如何使用switch case语句switch case语句用法详解_华清远见

如何使用switch case语句?switch case语句用法详解 华清远见的java培训导师为大家分享:如何使用switch case语句,以及switch case语句用法详解。 首先在使用switch case之前,我们需要了解一些注意事项: switch(A),括号中A的取值只能是整型或者可以转换为整型的数值类型,比如byte、short、int、char、还有枚举;需要强调的是:long和String类型是不能作用在switch语句上的。 case B:C;case是常量表达式,也就是说B的取值只能是常量(需要定义一个final型的常量,后面会详细介绍原因)或者int、byte、short、char(比如1、2、3、200000000000(注意了这是整型)),如果你需要在此处写一个表达式或者变量,那么就要加上单引号; case后的语句可以不用大括号,就是C不需要用大括号包裹着; default就是如果没有符合的case就执行它,default并不是必须的. 现在我们开始了解switch case,一般形式: switch(表达式){ case 常量表达式1: 语句1; case 常量表达式2: 语句2; … case 常量表达式n: 语句n; default: 语句n+1; } 意思是先计算表达式的值,再逐个和case 后的常量表达式比较,若不等则继续往下比较,若一直不等,则执行default后的语句;若等于某一个常量表达式,则从这个表达式后的语句开始执行,并执行后面所有case后的语句。 与if语句的不同:If语句中若判断为真则只执行这个判断后的语句,执行完就跳出if语句,不会执行其他if语句; 而switch语句不会在执行判断为真后的语句之后跳出循环,而是继续执行后面所有case语句。在每一case 语句之后增加break 语句,使每一次执行之后均可跳出switch语句,从而避免输出不应有的结果。 int a; printf("input integer number: ");

Select Case语句

Select Case语句 【教学目的】 1、知识与技能: (1)掌握Select Case语句的格式、功能和执行过程。 (2)学会使用Select Case语句解决实际问题。 2、情感态度: (1)在自主探究解决问题的过程中,让学生体验学习的乐趣。 (2)培养学生的逻辑思维能力,提高学生探究学习的能力。 【教学思想】 通过引例,让学生自己观察、思考Select Case 语句与If语句的不同,并体会Select Case的方便之处,通过实例练习使学生掌握Select Case语句的使用方法,达到教学目的。 【教学分析】 1、教学内容: 理解并掌握Select Case语句的语法格式、执行过程及其功能,理解Select Case解决多重选择问题上的直观、优越性,并能设计程序解决生活中的实际问题。 2、重点、难点 重点: (1)Select Case语句格式及执行过程。 (2)理解Select Case解决多重选择问题上的直观、优越性。 难点: 多重选择语句中的表达式与表达式列表(可以通过学生相互讨论及教师的总结,比较它们与If语句中的关系表达式有何不同,从中理解select case 语句中的表达式与表达式列表) 【教学方法和策略】 (1)启发式教学:在教学过程中,把讲解和提问相结合,把学生作为教学过程中的主体,引导学生主动思考,调动学生学习积极性和主动 性,培养学生深入钻研、探究的学习习惯。 (2)多媒体教学:充分运用了PPT技术,精心设计制作了动画效果,把抽象、难懂的计算机程序内容用生动、丰富的形式表现出来,更大

地激发学生学习兴趣,提高教学效果。【教学安排】

第五课if嵌套与case语句

第五课if嵌套与case语句 一、IF语句的嵌套 在if语句中,如果then子句或else子句仍是一个if语句,则称为if语句的嵌套。 例1计算下列函数 分析:根据输入的x值,先分成x>0与x≤0两种情况,然后对于情况x≤0,再区分x是小于0,还是等于0。 源程序如下: program ex; var x:real; y:integer; begin wrtie('Input x:');readln(x); if x>0 then y:=1{x>0时,y的值为1} else {x≤0时} if x=0 then y:=0 else y:=-1; writeln('x=',x:6:2,'y=',y); end.

显然,以上的程序中,在then子句中嵌套了一个Ⅱ型if语句。当然程序也可以写成如下形式: program ex; var x:real;y:integer; begin wrtie('Input x:');readln(x); if x>=0 then if x>0 then y:=1 else y:=0 else y=-1; writeln('x=',x:6:2,'y=',y); end. 但是对于本题,下面的程序是不对的。 y:=0; if x>=0 then if x>0 then y:=1 else y:=-1; 明显,从此人的程序书写格式可以看出,他想让else与第一个if配对,而事实上,这是错的。因为pascal规定:else与它上面的距它最近的then配对,因此以上程序段的逻辑意义就与题义不符。 要使上程序段中esle与第一个then配对,应将程序段修改为: y:=0; 或者 y:=0; if x>=0 if x>=0 then if x>0 then then y:=1 begin else if x>0 then Y:=1;

switch语句用法汇总(笔试必备)

Java switch case语句整理总结 前言:学会以下的几种用法,java笔试有关switch就都没问题了 switch(表达式) { case 常量表达式1: //如果常量表达式是1 ,可看做if(某变量==1) 语句1; break; //跳出switch需要认真理解 .... case 常量表达式2: //看做else if 语句2; break; default:语句; //看做else ,即都没符合 } 1、switch-case语句完全可以与if-else语句互转,但通常来说,switch-case语句执行效率要高。下面会举例解释。 2、default就是如果没有符合的case就执行它,default并不是必须的. 3、case后的语句可以不用大括号. 4、switch语句的判断条件可以接受int,byte,char,short,不能接受其他类型. 或者是final型的变量。 但是final型的变量也是有要求的,也即是它必须是编译时的常量,怎么讲呢,看下面的程序段: final int a = 0; final int b; 第二个语句就是在编译时不能够被识别出值的变量,因为它没有初始化,当然,这条语句也是错误的。 所以总结case后的值可以是常数值或final型的值。 5、一旦case匹配,就会顺序执行后面的程序代码,而不管后面的case是否匹配,直到遇见break,利用这一特性可以让好几个case执行统一语句. 原理归原理,下面是几个容易混淆的例子. 1.标准型(case后面都有break语句) int i=3; switch(i) { case 1: //相当于if(i==1) System.out.println(1); break; //跳出switch case 2: System.out.println(2); break; case 3: System.out.println(3); break; default: System.out.println("default"); break; } 输出结果: 3 2.特殊型1(不是完全有break语句,可以完成一些特殊应用) 例子:求2013 某月的天数month为月份

相关主题