
I.J. Information Technology and Computer Science, 2018, 1, 68-75

Published Online January 2018 in MECS

DOI: 10.5815/ijitcs.2018.01.08

Time Effective Workflow Scheduling using Genetic Algorithm in Cloud Computing

Rohit Nagar

Dr. B.R. Ambedkar National Institute of Technology, Department of CSE, Jalandhar, 144011, India

Deepak K. Gupta and Raj M. Singh

Dr. B.R. Ambedkar National Institute of Technology, Department of CSE, Jalandhar, 144011, India

E-mail: guptadk@nitj.ac.in

Received: 26 August 2017; Accepted: 07 November 2017; Published: 08 January 2018

Abstract—Cloud computing is a service-based technology on the Internet which facilitates users to access plenty of resources on demand, from anywhere and at any time, in a metered (pay-per-use) manner, without paying much heed to the maintenance and implementation details of the application. As cloud technology evolves day by day it is confronted by numerous challenges, such as time and cost under deadline constraints. Research work done so far has mainly focused on reducing cost as well as execution time. In order to minimize cost and execution time, the previously existing workflow scheduling model known as Predict Earliest Finish Time (PEFT) is used. In this research work we propose a new PEFT-seeded genetic algorithm approach to further reduce the execution time of this model. A strategy is developed to let the GA focus on optimizing the chromosomes' objective so as to obtain the best suitable mutated children. After obtaining a feasible solution, the genetic algorithm focuses on optimizing the execution time. Experimental results show that our algorithm can find a better solution within less time.

Index Terms—Cloud computing, Task Scheduling, Earliest finish time, Genetic Algorithm, Makespan.

I. INTRODUCTION

Over the past few years, cloud computing has become a trending topic for scientific research. Cloud computing ensures reliable, scalable, pay-per-use, customized and dynamic computing environments for end users. It provides many facilities for computing services: centralized servers, on-demand self-service, huge storage, databases, broad network access, rapid elasticity and software over the Internet [1]. Cloud computing is, in essence, the use of a network of remotely located servers hosted on the Internet for storing, processing and managing data, instead of a local server or a personal computer. The companies which offer such computing services are known as cloud providers. They may charge for their cloud computing services based on usage, much as you pay bills for water and electricity consumption at your home [2].

Cloud computing services are classified in three ways: Software as a Service (SaaS), Platform as a Service (PaaS), and Infrastructure as a Service (IaaS) [3]. SaaS applications are deployed over the Internet for clients in a single-instance, multi-tenant model and are accessed by various Internet-capable devices through a web browser or program interface [4]. It is one of the fastest growing services in the cloud. PaaS is a development offering that provides a collaborative platform, consisting of a database system, operating system, programming stacks and hardware, for creating business applications easily and quickly at low cost. IaaS provides computing infrastructure resources (VMs) in a multi-tenant fashion on a pay-per-use basis, instead of requiring users to purchase them.

The problem of mapping tasks to resources belongs to the class of NP problems: no known algorithm can generate the optimal solution within a feasible time period. Solutions based on exhaustive search are practically impossible, since the overhead of generating such schedules is very high. The PEFT algorithm is an improved version of the HEFT algorithm; it gives a suitable schedule with less makespan and less communication cost.

In this article, we discuss scheduling in a cloud computing environment. The introduction is given in Section 1. Related work is reviewed in Section 2. The problem is formulated in Section 3. The present work is explained in Section 4. Experimental analysis is included in Section 5, and Section 6 sums up the paper.

II. RELATED WORK

For a better understanding of workflow scheduling we went through several research papers. Researchers have proposed many algorithms, but none of them is best in all cases, because various parameters must be considered when judging an algorithm. In 2015, T. Bridi et al. presented a constraint-programming-based scheduler [5] which improves on the results obtained from commercial schedulers. It was implemented to be usable in real-life high-performance computing settings, and it works well in both simulated and real HPC environments. This scheduling algorithm ensures robustness, flexibility and scalability [5].

In 2016, J. Meena et al. proposed a meta-heuristic Cost Effective Genetic Algorithm (CEGA) [6] that reduces the execution cost of a workflow while meeting its deadline in cloud computing. It also covers some big issues such as performance variation and the booting time of virtual machines. Simulation experiments conducted on four scientific workflows (Montage, LIGO, CyberShake, Epigenomics) exhibited better performance than current state-of-the-art algorithms. The proposed CEGA algorithm shows the highest hit rate for the deadline constraint.

In 2015, A. Verma et al. used Bi-criteria Priority based Particle Swarm Optimization (BPSO) [7], a scheduling algorithm for workflow tasks over cloud processors under deadline and budget constraints. Each workflow task is given a priority using the bottom-level technique. It reduced the execution cost of the schedule compared to state-of-the-art algorithms under the same deadline and budget constraints, while also considering the load on resources.

In 2014, Arabnejad et al. introduced a list-based scheduling technique [8] named PEFT for heterogeneous distributed computing, which gives better results than HEFT in terms of makespan while having the same time complexity as HEFT. It can be regarded as an improved version of HEFT and consists of two phases: a task prioritizing phase and a processor selection phase. The algorithm uses a matrix called the Optimistic Cost Table (OCT). An OCT entry indicates the minimum time required for processing all the tasks that lie on the longest path from the current task to the end task. In task prioritization, task priority is calculated from the cumulative OCT; the optimistic EFT is then calculated to assign a processor to a task.

In 2002, H. Topcuoglu et al. presented an algorithm [9] called HEFT which solves the DAG scheduling problem on heterogeneous systems. HEFT works in two phases: a task prioritizing phase and a processor selection phase. In the processor selection phase it minimizes the earliest finish time of the child task of every selected task. The authors proposed two methods for scheduling a workflow in a heterogeneous environment, named HEFT and Critical Path on a Processor (CPOP). They work along the same lines with slight differences: the latter identifies the critical path and allocates the tasks on it to the processor which gives the minimum EFT. HEFT is better than other algorithms in the same domain because of its high efficiency in terms of makespan and its robust nature.

In 2011, Daoud et al. used the LDCP list-based heuristic to generate the initial population [10]. Longest Dynamic Critical Path (LDCP) is a list-based, tri-phase heuristic, and their H2GS algorithm combines LDCP with a GA. It uses the high-quality schedule generated by LDCP as a seed for the initial population, which is then exploited by a customized genetic algorithm. The schedule generated by LDCP is near optimal, and when such a schedule is given as input, the genetic algorithm converges faster. H2GS uses two-dimensional (2-D) chromosomes for representation and customized operators for searching the problem space. It has shown significant improvement in terms of speedup and normalized schedule length over HEFT and Dynamic Level Scheduling in heterogeneous distributed systems.

In 2012, Kaur et al. proposed a new Modified Genetic Algorithm (MGA) for scheduling tasks in a private cloud to minimize makespan and cost [11]. In MGA, the initial population is generated using SCFP (Smallest Cloudlet to Fastest Processor), LCFP (Longest Cloudlet to Fastest Processor) and 8 random schedules. Two-point crossover and simple swap mutation are used. This gives good performance under heavy loads.

In 2012, Ahmad et al. proposed an effective genetic algorithm called PEGA [12], which is capable of providing near-optimal results in a large search space with low time complexity. A direct chromosome representation having two parts is used. The right half is built using the b-level (upward rank), which gives better results in terms of schedule length than a randomly generated population. A two-fold crossover is used, in which single-point and two-point crossover are executed one after the other, to enhance the quality and the convergence speed of the solution.

In 2014, Shekhar Singh and Mala Kalra proposed a genetic algorithm based approach in which the initial population is generated with an advanced version of Max-Min, yielding more optimized results in terms of makespan [13]. Since task scheduling is a key issue in cloud computing, the authors used a GA in their work; however, in the standard genetic algorithm the initial population is randomly generated, which does not produce efficient results. Hence the authors modified the genetic algorithm: the initial population is generated using the Enhanced Max-Min algorithm and then given to the GA for further optimization. Experiments conducted on various data sets show that their Modified Genetic Algorithm (MGA) performs better than the standard genetic algorithm [13].

In 2012, Saima Gulzar Ahmad, Ehsan Ullah Munir et al. described the same PEGA approach [14], concluding that PEGA provides a better schedule with smaller makespan and low time complexity.

In 2012, Chuan Wang, Jianhua Gu, Yunlan Wang et al. presented a hybrid approach (HSCGS) which combines a successor-concerned list-based heuristic with a genetic algorithm [15]. The first phase is the seeding method for the GA: the initial population is generated from the schedule given by SCLS (Successor-Concerned List Scheduling), in which the priority list of tasks is formed using the upward rank. In the second phase, the good-quality schedule generated by the first phase is fed into the genetic algorithm. The authors showed that HSCGS gives better results than HEFT and DLS (Dynamic Level Scheduling) [15].

In 2013, Saeid Abrishami, Mahmoud Naghibzadeh et al. proposed two workflow scheduling algorithms based on the Partial Critical Path, which seek the minimal-cost solution subject to defined deadline constraints [16]. IC-PCP (IaaS Cloud Partial Critical Path) tries to schedule the tasks on a partial critical path by allocating them to already-available service instances before the path's latest finish time. IC-PCPD2 (IaaS Cloud Partial Critical Path with Deadline Distribution) uses a new path-assigning policy and plans so that the remaining time of an available instance is used first to execute a task before its sub-deadline, rather than starting a new instance of the service [16].

In 2014, A. Verma and Sakshi Kaushal proposed three hybrid genetic algorithms that use schedules generated from the bottom level and top level as the initial population, to minimize the execution cost of the schedule while meeting the deadline constraint [17]. BGA (Bottom-level GA) assigns priorities using the bottom level in descending order, while TGA (Top-level GA) considers the top level in increasing order. BTGA (Bottom-level and Top-level GA), which uses both levels, performs better than the other two [17].

In 2012, Beibei Zhu, Hongze Qiu et al. proposed a modified genetic algorithm for DAG scheduling in grid systems, obtained by improving the genetic operators [18]. By proposing a new fitness function and applying new genetic operators, their modified GA can obtain an optimal solution, as their experimental studies show [18].

In 2010, S. Selvarani and G. Sudha Sadhasivam proposed an improved cost-based scheduling algorithm with user task grouping for efficient mapping of tasks to resources in the cloud [19]. The algorithm measures both resource cost and computation performance. It improves the computation/communication ratio by grouping the user's tasks according to a particular cloud resource's processing capability and sending the grouped jobs to that resource [19].

The conclusion of the above research and analysis is that no single exact algorithm can be proposed: when the parameters change, the algorithm also has to change.

III. PROBLEM FORMULATION

Nowadays a large number of business applications are implemented as workflows. A workflow is denoted by a Directed Acyclic Graph (DAG), G = (T, E), where T is the set of tasks and E is the set of edges between tasks. A task cannot start until all of its predecessor tasks have completed. Workflow scheduling is the mapping of every task of a workflow onto the best suitable resource, while meeting the user's requirements and respecting task dependencies. The cloud is an attractive platform for workflow execution, since it offers scalability, durability, on-demand self-service, broad network access and a pay-per-use model. Because of the large number of tasks and virtual machines involved, workflow scheduling is one of the major issues in cloud computing. We define our problem as: mapping the tasks of a workflow to available resources (VMs) in a cloud computing environment so as to minimize execution time while meeting deadlines, that is:

Minimize ET
subject to ET ≤ D

where ET is the execution time (makespan) and D is the deadline of the given workflow.
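To make the model concrete, the sketch below gives one possible in-memory representation of such a workflow DAG in Java, the implementation language used in this work. The class and method names are illustrative assumptions, not taken from the authors' code.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Set;

// Minimal workflow-DAG model: a task may start only after all its predecessors finish.
class Task {
    final int id;
    final List<Task> predecessors = new ArrayList<>();
    final List<Task> successors = new ArrayList<>();

    Task(int id) { this.id = id; }
}

class Workflow {
    final List<Task> tasks = new ArrayList<>();

    Task addTask(int id) {
        Task t = new Task(id);
        tasks.add(t);
        return t;
    }

    // Edge (u, v) in E: v depends on u.
    void addDependency(Task u, Task v) {
        u.successors.add(v);
        v.predecessors.add(u);
    }

    // A task is ready to be scheduled once every predecessor has finished.
    boolean isReady(Task t, Set<Task> finished) {
        return finished.containsAll(t.predecessors);
    }
}
```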

IV. PRESENT WORK

Our research work was implemented on an Intel Core i5 machine with a 1 TB HDD and 4 GB RAM running Windows 10, using NetBeans with Java and the WorkflowSim simulator toolkit. The output of the PEFT algorithm for a DAG is given to the GA as its initial population; our proposed algorithm is thus a combination of the PEFT algorithm and a GA for workflow scheduling in a cloud computing environment. The algorithm reduces the execution time (makespan) while maintaining the deadline constraint. Our objectives are the following:

1. To study the existing task scheduling approaches for heterogeneous systems.
2. To propose an algorithm for scheduling workflows in a cloud environment, aiming to minimize execution time.
3. To evaluate the proposed solution by comparing it with the existing workflow scheduling approaches.

The proposed work is divided into two steps:

1. Generating a high-quality seed as input to the GA using the PEFT algorithm.
2. Obtaining an optimized schedule with the GA such that it gives minimal execution time in milliseconds and finishes execution of the workflow before the deadline.

Steps of the Proposed Methodology:

Fig.1. Flowchart of the working model of the proposed scheme.

Step 1: Generate the best suitable seed using the PEFT algorithm.

Step 1.1: Compute the OCT table. The OCT table is a matrix in which rows represent tasks and columns represent virtual machines. The OCT value is calculated by the equation given below, applied recursively in the backward direction, so that it accumulates the cost of executing all the children of the current task until the end task is reached.

OCT(t_i, p_k) = max_{t_j ∈ succ(t_i)} [ min_{p_w ∈ P} { OCT(t_j, p_w) + w(t_j, p_w) + c_{i,j} } ],  where c_{i,j} = 0 if p_w = p_k    (1)
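A possible realization of Eq. (1) is sketched below in Java. Here w[j][k] is the estimated execution time of task j on VM k and c[i][j] is the average communication cost of edge (i, j); the matrices, the successor lists and the memo table are assumed inputs, and all names are illustrative.

```java
import java.util.List;

class OctTable {
    // Memoized recursive OCT computation following Eq. (1).
    // oct[i][k] must be initialized to -1 (meaning "not yet computed");
    // succ.get(i) lists the successor tasks of task i.
    static double oct(int i, int k, double[][] w, double[][] c,
                      List<List<Integer>> succ, double[][] oct) {
        if (oct[i][k] >= 0) return oct[i][k];
        double max = 0.0;                              // exit task: OCT = 0
        for (int j : succ.get(i)) {
            double min = Double.MAX_VALUE;
            for (int pw = 0; pw < w[j].length; pw++) {
                double comm = (pw == k) ? 0.0 : c[i][j];   // c = 0 on the same VM
                min = Math.min(min, oct(j, pw, w, c, succ, oct) + w[j][pw] + comm);
            }
            max = Math.max(max, min);
        }
        return oct[i][k] = max;
    }
}
```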

Step 1.2: Calculate rank_oct for every task. The rank of a task is its cumulative OCT averaged over all processors, as in Eq. (2), and tasks are arranged in a list in descending order of rank_oct:

rank_oct(t_i) = ( Σ_{p_k ∈ P} OCT(t_i, p_k) ) / |P|    (2)

Step 1.3: The Earliest Finish Time (EFT) is computed and combined with the OCT, using the equation below, to allocate a task to a resource (processor):

O_EFT(t_i, p_j) = EFT(t_i, p_j) + OCT(t_i, p_j)    (3)

Step 1.4: The task is assigned to the processor (VM) which gives the minimum O_EFT.

Step 1.5: Repeat Steps 1.3 and 1.4 while the task list is not empty; otherwise return the best schedule in terms of makespan.

Step 2: If the termination condition is met, return the solution; otherwise repeat Steps 3 to 6.
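Steps 1.2 to 1.5 can then be sketched as follows. The EftEstimator callback stands in for the usual insertion-based EFT computation on the partial schedule, which is omitted here; it and the other names are assumptions for illustration.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

class PeftSeed {
    // Assumed hook: earliest finish time of a task on a VM, given the
    // partial schedule built so far (not shown).
    interface EftEstimator { double eft(int task, int vm); }

    // rank_oct of a task: average OCT over all VMs (Eq. 2).
    static double rankOct(double[] octRow) {
        double sum = 0.0;
        for (double v : octRow) sum += v;
        return sum / octRow.length;
    }

    // Process tasks in descending rank_oct order (Step 1.2) and assign each
    // to the VM minimizing O_EFT = EFT + OCT (Steps 1.3-1.5, Eq. 3).
    static int[] schedule(double[][] oct, EftEstimator eft) {
        int tasks = oct.length, vms = oct[0].length;
        List<Integer> order = new ArrayList<>();
        for (int i = 0; i < tasks; i++) order.add(i);
        order.sort(Comparator.comparingDouble((Integer i) -> rankOct(oct[i])).reversed());

        int[] assignment = new int[tasks];
        for (int t : order) {
            double best = Double.MAX_VALUE;
            for (int p = 0; p < vms; p++) {
                double oeft = eft.eft(t, p) + oct[t][p];
                if (oeft < best) { best = oeft; assignment[t] = p; }
            }
        }
        return assignment;                     // the PEFT seed schedule
    }
}
```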

Step 3: The best suitable schedule so generated is given to the genetic algorithm as input. The chromosomes (individuals of the population) are encoded using a direct representation. The quality of all feasible solutions is checked by the fitness function, which ensures that a solution has minimum cost and completes within the deadline.

Step 4: Select the chromosomes for applying the genetic operations using the binary tournament selection technique.


Step 5: Apply the crossover and mutation genetic operators to the selected chromosomes to produce new children (the next generation).

Step 6: Validate the new children with the fitness function and add the good-quality (valid) offspring to the new population.

Working of the Proposed Scheme using the Genetic Algorithm:

1:  P ← InitializePopulation() by PEFT             // P = population
2:  W ← ∅                                          // W = new population
3:  PF ← EvaluateFitness(P), keeping chromosomes with Execution Time ≤ Deadline
4:  Choose the two chromosomes with minimum makespan (min1 & min2)
5:  Parents ← Selection(PF) on the basis of makespan
6:  Offspring ← Crossover(PFc, Parents)            // PFc = crossover probability
7:  Offspring ← Mutation(PFm, Offspring)           // PFm = mutation probability
8:  EvaluateFitness(Offspring)
9:  Repeat steps 5 to 8 for the remaining chromosomes in PF to obtain offspring
10: Insert(Offspring, W)
11: T ← PF ∪ W                                     // merge the new offspring with the population
12: Rank the offspring in T on the basis of deadline
13: P ← select the best individuals of T on the basis of minimum makespan
14: End while
15: Return P, which contains the single best schedule
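The listing above can be mapped to a compact Java sketch, shown below. It is a minimal illustration under stated assumptions: a chromosome is a task-to-VM assignment array, makespan() stands in for the real fitness evaluation on the simulated cloud, DEADLINE is an illustrative constant, and the operator choices follow Table 1 (two-point crossover and swap mutation, both with probability 0.3).

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.Random;

class PeftGa {
    static final Random RNG = new Random();
    static final double DEADLINE = 100.0;            // illustrative deadline (ms)

    // Placeholder fitness: real code would compute the schedule makespan
    // of the task->VM assignment, e.g. via a WorkflowSim run.
    static double makespan(int[] chromosome) { return 0.0; }

    static int[] run(List<int[]> seeded, int iterations, double pc, double pm) {
        // Lines 1-3: PEFT-seeded population, filtered by the deadline.
        List<int[]> pf = new ArrayList<>();
        for (int[] c : seeded) if (makespan(c) <= DEADLINE) pf.add(c);

        for (int g = 0; g < iterations; g++) {       // termination: iteration count
            List<int[]> w = new ArrayList<>();       // line 2: new population W
            pf.sort(Comparator.comparingDouble(PeftGa::makespan));
            for (int i = 0; i + 1 < pf.size(); i += 2) {       // lines 4-9
                int[] a = pf.get(i).clone(), b = pf.get(i + 1).clone();
                if (RNG.nextDouble() < pc) twoPointCrossover(a, b);
                if (RNG.nextDouble() < pm) swapMutation(a);
                if (RNG.nextDouble() < pm) swapMutation(b);
                if (makespan(a) <= DEADLINE) w.add(a);         // line 10
                if (makespan(b) <= DEADLINE) w.add(b);
            }
            pf.addAll(w);                            // line 11: T = PF ∪ W
            pf.sort(Comparator.comparingDouble(PeftGa::makespan));  // lines 12-13
            if (pf.size() > seeded.size())
                pf = new ArrayList<>(pf.subList(0, seeded.size()));
        }
        return pf.get(0);                            // line 15: best schedule
    }

    static void twoPointCrossover(int[] a, int[] b) {   // swap the middle segment
        int p1 = RNG.nextInt(a.length), p2 = RNG.nextInt(a.length);
        for (int i = Math.min(p1, p2); i <= Math.max(p1, p2); i++) {
            int t = a[i]; a[i] = b[i]; b[i] = t;
        }
    }

    static void swapMutation(int[] c) {              // simple swap of two genes
        int i = RNG.nextInt(c.length), j = RNG.nextInt(c.length);
        int t = c[i]; c[i] = c[j]; c[j] = t;
    }
}
```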

• Initial Population: In the first step, the initial population is initialized using the PEFT algorithm. The generated strings are known as chromosomes (P).

• Evaluate Fitness of each Individual: The quality of an obtained solution is evaluated by the fitness function, which is described using the deadline: the chromosomes obtained by the PEFT algorithm are tested against their deadlines, and those which meet the deadline are added to the next-generation population (PF).

• Selection: Selection plays a major role in improving the performance of the approach by picking high-quality chromosomes for the next operations. From the population (PF), the two chromosomes which meet the deadline with minimum makespan are selected as parents.

• Crossover: Crossover exchanges parts of chromosomes, so that the GA generates new chromosomes from the previous generation (PF) by interchanging parts of the parents. Crossover is applied to the selected chromosomes to obtain crossover children.

• Mutation: The main purpose of mutation is to introduce a new chromosome that does not exist in the current population. After crossover, the mutated children are obtained.

• Evaluate fitness of each offspring: The mutated children are added to the population W, where the offspring are evaluated by the fitness function. If an offspring's makespan is lower than that of the chromosomes with the greatest (worst) fitness value in the population PF, it replaces them.

Finally, the new offspring are added to the new generation (T). This process repeats until a good result is found; in the proposed algorithm it is repeated 5 times. All the best solutions are stored, the chromosome with the minimum makespan is selected as the final solution, and tasks are allocated according to this best schedule.
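Putting the pieces together, a hypothetical driver could seed the GA with the PEFT schedule plus a few perturbed copies and run it with the Table 1 parameters; everything below is illustrative.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

class Driver {
    public static void main(String[] args) {
        // Assumed PEFT seed for 10 tasks on 4 VMs; lightly perturbed copies
        // give the GA more than one chromosome to recombine.
        int[] peftSeed = {0, 1, 2, 3, 0, 1, 2, 3, 0, 1};
        List<int[]> population = new ArrayList<>();
        population.add(peftSeed);
        for (int i = 0; i < 9; i++) {
            int[] copy = peftSeed.clone();
            PeftGa.swapMutation(copy);       // reuse the mutation operator
            population.add(copy);
        }

        // Parameters as in Table 1: 5 iterations, pc = 0.3, pm = 0.3.
        int[] best = PeftGa.run(population, 5, 0.3, 0.3);
        System.out.println("Best schedule: " + Arrays.toString(best));
    }
}
```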

V. EXPERIMENTAL ANALYSIS

To evaluate the performance of our proposed algorithm, PEFTGA, we performed simulations using WorkflowSim. We considered a few scientific workflows from different domains: Montage, Inspiral and the example DAG from the PEFT paper. These workflows have different structural properties and different data and computational requirements. The genetic algorithm is taken as the baseline algorithm.

Our PEFTGA algorithm is tested by:

1. Varying the number of tasks in the datasets.
2. Varying the inter-dependencies of the tasks in the datasets.

Experimental results show that for various datasets our proposed algorithm, Predict Earliest Finish Time with Genetic Algorithm (PEFTGA), exhibits very good performance.

Table 2 contains the values obtained for the different datasets. It shows that our proposed algorithm PEFTGA gives a better makespan than the standard genetic algorithm. The following figures show the makespan of the various scientific workflows in milliseconds.

Table 1. GA Parameters

Parameter | Value
Number of Processors | 4
Number of Iterations | 5
Crossover Type | Two-Point Crossover
Crossover Probability | 0.3
Mutation Type | Simple Swap
Mutation Probability | 0.3
Termination Condition | Number of Iterations

Table 2. Improvement in makespan comparing GA & PEFTGA

Sr. No. | Dataset | GA | PEFTGA | (GA-PEFTGA)/GA
1 | Peft_Paper | 18.64 ms | 15.69 ms | 15.82%
2 | Montage_25 | 33.02 ms | 19.22 ms | 41.79%
3 | Montage_50 | 51.04 ms | 33.37 ms | 34.61%
4 | Montage_100 | 69.02 ms | 83.87 ms | 14.43%
5 | Inspiral_30 | 810.16 ms | 414.60 ms | 48.82%
6 | Inspiral_50 | 1243.73 ms | 785.79 ms | 36.81%
7 | Inspiral_100 | 1916.08 ms | 1875.41 ms | 2.12%

Fig.2. Peft_Paper

From Fig. 2, the proposed PEFTGA algorithm is applied to the dataset given in the PEFT paper [8], and the result shows that our algorithm gives a 15.82% smaller makespan than the original GA. This dataset has 10 inter-dependent tasks with different execution times and costs, as shown in the PEFT paper [8].

Fig.3. Montage_25

Fig.4. Montage_50

From Fig. 3, the proposed PEFTGA algorithm is applied to the Montage dataset with 25 inter-dependent tasks with different execution costs and times; the result gives a 41.79% smaller makespan than the original GA. From Fig. 4, the same PEFTGA algorithm is applied to the Montage_50 dataset containing 50 inter-dependent tasks with different execution times and costs, and the result gives a 34.61% smaller makespan than the original GA. Here the dependencies are of the same type but the number of tasks is larger, which shows that the performance of our algorithm depends not only on the number of tasks but also on the inter-connectivity of the tasks.

Fig.5. Montage_100

From Fig. 5, the proposed PEFTGA algorithm is applied to the Montage_100 dataset, which contains 100 tasks and is more complex in terms of inter-dependencies; the result gives a 14.43% smaller makespan than the original GA. The proposed algorithm is further tested using various datasets with different numbers of tasks and different inter-connectivity, as shown in the following figures.

Fig.6. Inspiral_30

From Fig. 6, the proposed PEFTGA algorithm is applied to the Inspiral_30 dataset, containing 30 tasks with various dependencies, and the result gives a 48.82% smaller makespan than the original GA. As we increase the number of tasks, the improvement becomes smaller than that obtained for the Montage_25 dataset with 25 tasks; this difference is due to the variation in dependencies.

Fig.7. Inspiral_50

From Fig. 7, to understand how the number of tasks affects the result, we used the same type of Inspiral dataset with 50 inter-connected tasks. After applying the proposed PEFTGA algorithm to the Inspiral_50 dataset, it gives a 36.81% smaller makespan than the original GA, again confirming that our proposed algorithm is better than the original genetic algorithm. It also shows less improvement than Inspiral_30 with its 30 tasks, which had a larger improvement owing to its smaller task count. By increasing the number of tasks we can thus analyze the variation in the makespan improvement.

Fig.8. Inspiral_100

From Fig. 8, the proposed PEFTGA algorithm applied to the Inspiral_100 dataset gives a 2.12% smaller makespan than the original GA. Since the proposed algorithm consistently gives a smaller makespan, the comparison with the baseline genetic algorithm has been made in terms of completion time.

VI. CONCLUSION

Cloud computing has to deliver high performance in computing resources over the Internet for workflows, and task scheduling is one of the major issues in cloud computing. To address this issue, we have used the Predict Earliest Finish Time (PEFT) algorithm together with a genetic algorithm. The standard genetic algorithm produces inefficient results because its initial population is generated randomly; hence we have modified it by using PEFT to generate the initial population. Our proposed algorithm, the Predict Earliest Finish Time Genetic Algorithm (PEFTGA), targets reducing the total completion time (makespan) of a workflow and maximizing resource utilization.

We have compared the proposed algorithm with the standard genetic algorithm. The results show that PEFTGA schedules tasks on virtual machines better in terms of makespan: the completion time of PEFTGA is reduced by 25% on average compared to the standard GA. Since cost is proportional to execution time, the cost of PEFTGA is also reduced compared to the default GA. From the results we conclude that, compared to the original genetic algorithm, PEFTGA shows the best performance for static scheduling of directed acyclic graphs (DAGs) in heterogeneous systems. As the figures show, the improvement shrinks as the number of tasks grows, and it also varies with the inter-dependency among the tasks in the datasets; hence our proposed algorithm is best suited for workflows with a smaller number of tasks. To overcome these limitations, in the future we would like to consider further parameters, such as execution cost, termination delay of virtual machines, energy consumption of data centers, data transfer cost between data centers, average makespan values and the number of available processors, to make the approach more suitable for large datasets with complex task inter-dependencies.

REFERENCES

[1] Mell, P. and Grance, T., "The NIST Definition of Cloud Computing," Computer Security Division, Information Technology Laboratory, National Institute of Standards and Technology, Gaithersburg, pp. 20-23, 2011.
[2] Kaur, P.D., "Unfolding the Distributed Computing Paradigm," International Conference on Advances in Computer Engineering, pp. 339-342, 2010.
[3] Gibson, J., Rondeau, R., Eveleigh, D., and Tan, Q., "Benefits and Challenges of Three Cloud Computing Service Models," Fourth International Conference on Computational Aspects of Social Networks, IEEE, pp. 198-205, 2012.
[4] Silva, J.N., Veiga, L., and Ferreira, P., "Heuristics for Resource Allocation on Utility Computing Infrastructures," 6th International Workshop on Middleware for Grid Computing, New York, 2008.
[5] Bridi, T., Bartolini, A., Lombardi, M., Milano, M., and Benini, L., "A Constraint Programming Scheduler for Heterogeneous High-Performance Computing Machines," pp. 1-14, 2016.
[6] Meena, J. and Kumar, M.M., "Cost Effective Genetic Algorithm for Workflow Scheduling in Cloud Under Deadline Constraint," vol. 4, pp. 5065-5082, 2016.
[7] Verma, A., "Cost Minimized PSO based Workflow Scheduling Plan for Cloud Computing," I.J. Information Technology and Computer Science, no. 08, pp. 37-43, 2015.
[8] Arabnejad, H. and Barbosa, J.G., "List Scheduling Algorithm for Heterogeneous Systems by an Optimistic Cost Table," IEEE Transactions on Parallel and Distributed Systems, vol. 25, no. 3, March 2014.
[9] Topcuoglu, H., Hariri, S., and Wu, M., "Performance-Effective and Low-Complexity Task Scheduling for Heterogeneous Computing," IEEE Transactions on Parallel and Distributed Systems, vol. 13, no. 3, pp. 260-274, 2002.
[10] Daoud, I.M. and Kharma, N., "A Hybrid Heuristics-Genetic Algorithm for Task Scheduling in Heterogeneous Processor Networks," Journal of Parallel and Distributed Computing, vol. 71, no. 11, pp. 1518-1531, 2011.
[11] Kaur, S. and Verma, A., "An Efficient Approach to Genetic Algorithm for Task Scheduling in Cloud Computing Environment," I.J. Information Technology and Computer Science, pp. 74-79, 2012.
[12] Ahmad, S.G., Munir, E.U., and Nisar, W., "PEGA: A Performance Effective Genetic Algorithm for Task Scheduling in Heterogeneous Systems," IEEE 14th International Conference on High Performance Computing and Communications, pp. 1082-1087, 2012.
[13] Singh, S. and Kalra, M., "Scheduling of Independent Tasks in Cloud Computing Using Modified Genetic Algorithm," IEEE, pp. 565-569, DOI: 10.1109/CICN.2014.128, 2014.
[14] Ahmad, S.G., Munir, E.U., and Nisar, W., "A Performance Effective Genetic Algorithm for Task Scheduling in Heterogeneous Systems (PEGA)," IEEE, pp. 1082-1087, DOI: 10.1109/HPCC.2012.158, 2012.
[15] Wang, C., Gu, J., Wang, Y., and Zhao, T., "Hybrid Heuristic-Genetic Algorithm for Task Scheduling in Heterogeneous Multi-Core System (HSCGS)," Springer, DOI: 10.1007/978-3-642-33078-0_12, 2012.
[16] Abrishami, S. and Naghibzadeh, M., "Deadline-Constrained Workflow Scheduling Algorithms for Infrastructure as a Service Clouds," IEEE, vol. 29, no. 1, pp. 158-169, 2013.
[17] Verma, A. and Kaushal, S., "Deadline Constraint Heuristic-Based Genetic Algorithm for Workflow Scheduling in Cloud," IEEE, vol. 5, no. 2, pp. 96-106, 2014.
[18] Zhu, B. and Qiu, H., "Modified Genetic Algorithm for DAG Scheduling in Grid Systems," IEEE, pp. 465-468, DOI: 10.1109/ICSESS.2012.6269505, 2012.
[19] Selvarani, S. and Sadhasivam, G.S., "Improved Cost-Based Algorithm for Task Scheduling in Cloud Computing," IEEE, pp. 1-5, DOI: 10.1109/ICCIC.2010.5705847, 2010.

Authors’ Profiles

Rohit Nagar lives in Delhi, India. He was born on November 9, 1988. He pursued his M.Tech at the Department of Computer Science & Engineering, Dr. B.R. Ambedkar National Institute of Technology, Jalandhar. Currently he is working at Infozech Software Pvt. Limited, New Delhi, as a Data Analyst. His current research interests include Cloud Computing and Data Analytics.

Deepak Kumar Gupta is Associate Professor at the Department of Computer Science & Engineering, Dr. B.R. Ambedkar National Institute of Technology, Jalandhar. He has 29 years of experience in total, spanning teaching and industry. His current research interests include Social Media, Data Analytics and Operating Systems.

Raj Mohan Singh received his M.Tech degree in Computer Science and Engineering from Dr. B.R. Ambedkar National Institute of Technology, Jalandhar. His research interests include Cloud Computing, Operating Systems, Data Analytics and Distributed Computing.

How to cite this paper: Rohit Nagar, Deepak K. Gupta, Raj M. Singh, "Time Effective Workflow Scheduling using Genetic Algorithm in Cloud Computing", International Journal of Information Technology and Computer Science (IJITCS), Vol. 10, No. 1, pp. 68-75, 2018. DOI: 10.5815/ijitcs.2018.01.08
