Abstract
Server failures in the cloud introduce acute operational dilemmas, since the cloud management entity must now handle the preservation of existing tasks in addition to the admission of new ones. These admission and preservation decisions have a significant impact on cloud performance and operational cost, as they constrain future system decisions. Should a cloud manager devote resources to admitting new tasks, increasing the risk of dropping an already admitted task in the future? Or should it reserve resources for potential future task preservations at the expense of new task admissions? These dilemmas are even more critical in Distributed Cloud Computing (DCC) due to the small scale of the micro Cloud Computing Center (mCCC). In this paper we address these questions through Markov Decision Process (MDP) analysis. We show that even though the problem appears rather complicated, as the two decision rules are coupled, it can be significantly simplified, since one of the rules turns out to be of a trivial form. These results enable us to compose a holistic framework for cloud computing task management.
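The paper itself contains the formal MDP model; as a rough illustration of the trade-off described above, the sketch below formulates a toy version in Python and solves it by value iteration. The state is the number of tasks currently hosted by the mCCC, the two coupled decisions are whether to admit an arriving task and whether to preserve a task hit by a server failure, and every numeric parameter (capacity, event probabilities, reward, penalty) is an illustrative assumption rather than a value from the paper.

```python
# Toy admission/preservation MDP solved by value iteration.
# All parameters below are illustrative assumptions, not the paper's model.
import numpy as np

C       = 5      # mCCC capacity: max concurrent tasks (assumed)
p_arr   = 0.3    # probability of a new task arrival per slot
p_fail  = 0.1    # probability a hosted task is hit by a server failure
p_dep   = 0.2    # probability a hosted task completes and departs
r_admit = 1.0    # reward for admitting a new task
c_drop  = 5.0    # penalty for dropping an already admitted task
gamma   = 0.95   # discount factor

states = np.arange(C + 1)   # state = number of currently hosted tasks
V = np.zeros(C + 1)

def step_value(n, admit, preserve, V):
    """Expected one-step value in state n when the manager decides whether
    to admit an arriving task and whether to preserve a failed task."""
    value = 0.0
    # Arrival event: admit only if the policy says so and capacity allows.
    if admit and n < C:
        value += p_arr * (r_admit + gamma * V[n + 1])
    else:
        value += p_arr * gamma * V[n]
    # Failure event: preserve the hit task (keep it hosted) or drop it.
    if n > 0 and not preserve:
        value += p_fail * (-c_drop + gamma * V[n - 1])
    else:
        value += p_fail * gamma * V[n]
    # Departure event: a hosted task completes normally.
    value += p_dep * gamma * V[max(n - 1, 0)]
    # No event this slot.
    value += (1 - p_arr - p_fail - p_dep) * gamma * V[n]
    return value

# Value iteration over the joint (admission, preservation) decision.
for _ in range(1000):
    V_new = np.array([
        max(step_value(n, a, p, V) for a in (0, 1) for p in (0, 1))
        for n in states
    ])
    if np.max(np.abs(V_new - V)) < 1e-8:
        V = V_new
        break
    V = V_new

# Greedy policy with respect to the converged value function.
print("hosted tasks -> (admit?, preserve?)")
for n in states:
    best = max(((a, p) for a in (0, 1) for p in (0, 1)),
               key=lambda ap: step_value(n, ap[0], ap[1], V))
    print(f"  {n} -> {best}")
```

Because the assumed drop penalty dominates the per-task admission reward, preservation is the natural choice in this toy model in every state, which loosely echoes (but does not reproduce) the abstract's observation that one of the two coupled decision rules has a trivial form; the admission rule is where the real trade-off sits.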
Original language | English |
---|---|
Pages (from-to) | 25-29 |
Number of pages | 5 |
Journal | Performance Evaluation Review |
Volume | 43 |
Issue number | 3 |
DOIs | |
State | Published - Dec 2015 |
Keywords
- Admission control
- Markov Decision Process
- Task management
- Task preservation