You have been asked to provide advice to Coles's CEO on developing the best information systems strategies that would help Coles improve its business performance and competitive advantage. It is essential that you conduct research based on the following suggestions and justify your selection of the best information systems strategies suitable for Coles. Other relevant information about the case study includes the following:
• Discuss the historical background of the company's IS strategies since they were first introduced, including milestones.
• Identify and discuss major challenges facing the company in the last five years, including the present situation.
• Discuss its current IS strategies and whether the strategies implemented are suitable in the current business environment.
• Research Coles's competitors' IS strategies and provide a comparison of IS strategies between Coles and Woolworths.
• Discuss the importance of an IS/IT strategy management plan in helping Coles achieve competitive advantage.
• Propose two IS strategies that provide solutions to current company needs/problems. It is important that you propose two strategies as a back-up plan should one strategy fail.
• Select one or two best strategies that you believe would be successful for Coles. Explain how the proposed IS strategies and acquisition of information systems would contribute to the success of the company.
• Propose the best time to introduce these new strategies. Explain why the selection is appropriate.
Consider the following game tree, and assume that the first player is the maximising player:
(i) Which move should the first player choose? [2 Marks]
(ii) Assuming that nodes are searched left-to-right using the alpha-beta algorithm, which nodes would not be examined? [6 Marks]
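The game tree itself is not reproduced here, but for reference, a minimal sketch of the alpha-beta recursion is shown below; the Node type and its fields are illustrative assumptions rather than part of the question.

import java.util.List;

class Node {
    final int value;            // static evaluation, used only at leaves
    final List<Node> children;  // empty list for a leaf
    Node(int value, List<Node> children) { this.value = value; this.children = children; }

    static int alphaBeta(Node node, int alpha, int beta, boolean maximising) {
        if (node.children.isEmpty()) return node.value;              // leaf: return its static value
        int best = maximising ? Integer.MIN_VALUE : Integer.MAX_VALUE;
        for (Node child : node.children) {                           // children examined left to right
            int score = alphaBeta(child, alpha, beta, !maximising);
            if (maximising) { best = Math.max(best, score); alpha = Math.max(alpha, best); }
            else            { best = Math.min(best, score); beta  = Math.min(beta,  best); }
            if (beta <= alpha) break;                                // cut-off: remaining children are pruned
        }
        return best;
    }
}

With left-to-right examination, a cut-off at a node means all of that node's remaining (right-hand) children are never visited, which is exactly the set of nodes part (ii) asks you to identify.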
Question 1
A. Let S be the set of all the strings over the alphabet {a, b, c}, that is, S = {a, b, c}*. Let S1 and S2 be subsets of S. Which are the strings that appear both in S1 and in S2 in each of the following cases?
i. S1 contains the t alphabetically smallest strings in S, and S2 contains all the strings in S of length t at most.
ii. S1 contains the 127 alphabetically smallest strings in S, and S2 contains the 127
B. Prove the following by mathematical induction:
i. Σ_{i=1}^{n} i = n(n+1)/2
ii. Σ_{i=0}^{n} i² = n(n+1)(2n+1)/6
C. Prove that for any integers a and b, if a and b are odd, then ab is odd.
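As a reminder of the shape such an argument takes, a sketch of the standard induction for B(i) is shown below; it is for reference only and simply spells out the usual base-case/inductive-step structure.

\[
\text{Base case } (n = 1):\quad \sum_{i=1}^{1} i = 1 = \frac{1(1+1)}{2}.
\]
\[
\text{Inductive step: if } \sum_{i=1}^{k} i = \frac{k(k+1)}{2}, \text{ then }
\sum_{i=1}^{k+1} i = \frac{k(k+1)}{2} + (k+1) = \frac{(k+1)(k+2)}{2}.
\]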
Question 2
A. For each of the following languages, construct a finite-state automaton that accepts the language.
i. { x | x is in {0, 1}*, and no two 0's are adjacent in x }
ii. { x | x is in {0, 1}*, and each substring of length 3 in x contains at least two 1's }
iii. { 1^z | z = 3x + 5y for some natural numbers x and y }
iv. { x | x is in {0, 1}*, and the number of 1's between every two 0's in x is even }
B. Find a deterministic finite-state automaton that is equivalent to the finite-state automaton whose transition diagram is given in the figure below.
C. Find a finite-state automaton that accepts the language L(G), for the case that G is the Type 3 grammar whose production rules are listed below.
D. Prove that the language L = { 0^n : n is a prime number } is not regular.
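For reference, the automaton in A(i) can be realised as a small table-driven recogniser. The sketch below is one possible encoding; the class and method names are chosen for illustration and are not part of the question.

// A sketch of a DFA for A(i): binary strings with no two adjacent 0's.
// States: 0 = start or last symbol was 1 (accepting), 1 = last symbol was 0 (accepting), 2 = dead (saw "00").
class NoAdjacentZeros {
    static boolean accepts(String x) {
        int state = 0;                                      // start state
        for (char c : x.toCharArray()) {
            if (c == '0')      state = (state == 0) ? 1 : 2; // a 0 right after a 0 falls into the dead state
            else if (c == '1') state = (state == 2) ? 2 : 0;
            else return false;                               // input not over {0, 1}
        }
        return state != 2;                                   // states 0 and 1 are accepting
    }
}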
NAME: _______________________________
605.741
Module 14 Quiz
There are 3 questions to answer below:
We learned about text clustering methods for documents by representing each document as a vector of non-stop-words and comparing the similarity of documents using the Tanimoto Cosine Distance metric.
1. Write pseudocode that takes as input a corpus (set) of documents and creates a vector for each document, where the vectors do not contain stop-words and are weighted by the term frequency multiplied by the log of the inverse document frequency, as described in the course module.
DocumentVectorSet documentVectorSet =
CreateDocumentVectors(documentSet);
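One possible shape for this pseudocode, written in Java-flavoured form, is sketched below; DocumentVectorSet, DocumentVector, Document, and the STOP_WORDS set are assumed helper types rather than definitions from the course module.

// Sketch of CreateDocumentVectors: strips stop-words and weights each term by tf * log(N / df).
DocumentVectorSet CreateDocumentVectors(Set<Document> documentSet) {
    int n = documentSet.size();
    // Pass 1: document frequency of each non-stop-word term.
    Map<String, Integer> docFreq = new HashMap<>();
    for (Document doc : documentSet) {
        for (String term : new HashSet<>(doc.terms())) {   // each term counted once per document
            if (!STOP_WORDS.contains(term)) docFreq.merge(term, 1, Integer::sum);
        }
    }
    // Pass 2: build a tf-idf weighted vector per document.
    DocumentVectorSet vectors = new DocumentVectorSet();
    for (Document doc : documentSet) {
        DocumentVector v = new DocumentVector(doc);
        for (String term : doc.terms()) {
            if (STOP_WORDS.contains(term)) continue;        // drop stop-words from the vector
            double tf = doc.countOf(term);                   // term frequency within this document
            double idf = Math.log((double) n / docFreq.get(term));
            v.setWeight(term, tf * idf);
        }
        vectors.add(v);
    }
    return vectors;
}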
2. Write pseudocode that takes two document vectors and measures their similarity.
Similarity similarity =
DocumentSimilarity(documentVectorA, documentVectorB);
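A corresponding sketch is shown below; it uses plain cosine similarity over the weighted term maps rather than reproducing the module's exact Tanimoto/cosine formulation, and the weights() accessor on DocumentVector is an assumption for illustration.

// Sketch of DocumentSimilarity: cosine of the angle between two tf-idf vectors.
Similarity DocumentSimilarity(DocumentVector a, DocumentVector b) {
    double dot = 0.0, normA = 0.0, normB = 0.0;
    for (Map.Entry<String, Double> e : a.weights().entrySet()) {
        normA += e.getValue() * e.getValue();
        Double wb = b.weights().get(e.getKey());
        if (wb != null) dot += e.getValue() * wb;            // only shared terms contribute to the dot product
    }
    for (double w : b.weights().values()) normB += w * w;
    double cosine = (normA == 0 || normB == 0) ? 0.0 : dot / (Math.sqrt(normA) * Math.sqrt(normB));
    return new Similarity(cosine);                            // 1.0 = same direction, 0.0 = no shared terms
}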
After performing K-means clustering, let us suppose that we examine the clusters by sight and assign names to them. For example, one cluster may represent documents about sports, another may represent documents about politics, and yet another may represent documents about animals. Let us assume that we assign each cluster a name such as sports, politics, and animals.
Sometimes, words are used in multiple contexts. For example, the word duck is ambiguous. Sometimes it means a waterfowl and would fall into the animal category. Sometimes it is used in politics, such as a lame duck congress, and would fall into the politics category. Sometimes it is used in sports, such as in the name of the National Hockey League team the Anaheim Ducks, and would fall into the sports category. Knowing in which context the word is used makes the clustering much better. To understand why, suppose that we had two documents, one with the words duck and water, and the other with the words duck and ice. Without understanding the context of the word duck, our similarity metric may actually find that these documents are similar. However, when duck appears with water, the word duck probably refers to an animal, whereas when duck appears with ice, the word duck probably refers to sports. With this knowledge, our similarity metric would find these documents not very similar at all.
Suppose we had a library of words that are used in multiple contexts such as:
String[] multiContextWords = {"duck", "crane", "book", …};
Suppose also that we have a multi-dimensional array that shows the multi-context words and common words that are used with them:
String[][] wordContext = {
{"duck (animal)", "zoo", "feathers", "water", …},
{"duck (sports)", "hockey", "Anaheim", "ice", …},
{"duck (politics)", "congress", "lame", …},
{"crane (animal)", "bird", "water", …},
{"crane (construction)", "building", "equipment", …},
…};
3. Modify the CreateDocumentVectors() pseudocode from above to take advantage of the multiContextWords[] and wordContext[][] arrays to create better document vectors so that the subsequent call to DocumentSimilarity() will better distinguish contexts.
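One way the modification could look is sketched below: each multi-context word is replaced by a sense-tagged term before weighting, based on how many of a sense's context words appear in the same document. All helper names here are illustrative assumptions.

// Sketch: pick the best sense of a multi-context word by overlap with the wordContext rows.
String disambiguate(String term, Set<String> docTerms, String[][] wordContext) {
    String bestSense = term;
    int bestOverlap = 0;
    for (String[] row : wordContext) {
        if (!row[0].startsWith(term + " (")) continue;        // rows for this word, e.g. "duck (animal)"
        int overlap = 0;
        for (int i = 1; i < row.length; i++) {
            if (docTerms.contains(row[i])) overlap++;          // count context words present in the document
        }
        if (overlap > bestOverlap) { bestOverlap = overlap; bestSense = row[0]; }
    }
    return bestSense;                                          // falls back to the raw term if nothing matches
}

// In CreateDocumentVectors(), any term listed in multiContextWords[] would be passed through
// disambiguate() before the tf-idf weighting, so "duck"+"water" and "duck"+"ice" end up on different
// vector dimensions and no longer look similar to DocumentSimilarity().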