Reliance on cloud computing infrastructure has increased rapidly worldwide, and cloud providers supply users with ever-larger amounts of resources. Provisioning user demands is an important problem for the cloud provider, as the number of demands grows while the resources in each data center (DC) and the available bandwidth between DCs remain limited. The provisioning problem becomes even harder in real-time scenarios, where the number of demands for the requested service is not known in advance. In this paper, we consider dynamic provisioning of cloud resources, including bandwidth scheduling, for user demands whose requested DC resources (e.g., CPUs, storage, and network ports) and bandwidth are not known in advance. We assume that the DCs are interconnected by IP/MPLS-over-WDM networks, which allow bandwidth from different demands to be groomed onto shared connections where possible. Accepting more demands while reducing cost is important for providers. We propose two greedy approaches to solve this problem and study their effects on the blocking probability and on the total cost, which includes both operating expenditure (OpEx) and capital expenditure (CapEx).
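To make the problem setting concrete, the sketch below shows a minimal greedy first-fit admission heuristic for demands that arrive online with joint DC-resource and bandwidth requirements. The data-center and link model, the demand format, and the placement rule are illustrative assumptions, not the specific greedy approaches proposed in this paper.

```python
# A minimal sketch of greedy first-fit admission for online demands that need
# DC resources (CPU, storage, ports) plus bandwidth to a source DC.
# All names, capacities, and the placement rule are assumptions for illustration.

from dataclasses import dataclass
from typing import List, Optional


@dataclass
class DataCenter:
    cpu: int       # residual CPU units
    storage: int   # residual storage units
    ports: int     # residual network ports


@dataclass
class Link:
    a: int             # DC index at one end
    b: int             # DC index at the other end
    bandwidth: float   # residual (groomable) bandwidth on the link


@dataclass
class Demand:
    cpu: int
    storage: int
    ports: int
    bandwidth: float
    src: int   # DC where the demand's traffic originates (assumption)


def admit_greedy(demand: Demand, dcs: List[DataCenter], links: List[Link]) -> Optional[int]:
    """Place the demand on the first DC with enough residual resources and,
    if the DC differs from the source, enough residual bandwidth on the
    direct link; return the chosen DC index or None if the demand is blocked."""
    for i, dc in enumerate(dcs):
        if dc.cpu < demand.cpu or dc.storage < demand.storage or dc.ports < demand.ports:
            continue
        link = None
        if i != demand.src:
            # Grooming: the demand shares whatever residual capacity the link still has.
            link = next((l for l in links
                         if {l.a, l.b} == {i, demand.src} and l.bandwidth >= demand.bandwidth),
                        None)
            if link is None:
                continue
        # Reserve the resources for the accepted demand.
        dc.cpu -= demand.cpu
        dc.storage -= demand.storage
        dc.ports -= demand.ports
        if link is not None:
            link.bandwidth -= demand.bandwidth
        return i
    return None  # blocked


if __name__ == "__main__":
    dcs = [DataCenter(cpu=16, storage=100, ports=8), DataCenter(cpu=8, storage=50, ports=4)]
    links = [Link(0, 1, bandwidth=8.0)]
    demands = [Demand(4, 20, 1, 3.0, src=1),
               Demand(8, 40, 2, 6.0, src=1),
               Demand(8, 40, 2, 6.0, src=1)]
    blocked = sum(admit_greedy(d, dcs, links) is None for d in demands)
    print(f"blocking probability: {blocked / len(demands):.2f}")
```

In this toy run the third demand is blocked (no DC has both the residual resources and a link with enough residual bandwidth), giving a blocking probability of 1/3; a cost-aware variant would additionally weigh OpEx and CapEx when choosing among feasible DCs.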