<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[Dhruv's Blogs]]></title><description><![CDATA[Dhruv's Blogs]]></description><link>https://blog.dhruvdakoria.com</link><generator>RSS for Node</generator><lastBuildDate>Sun, 12 Apr 2026 00:30:55 GMT</lastBuildDate><atom:link href="https://blog.dhruvdakoria.com/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[Mean and Covariance of Multivariate data with Python]]></title><description><![CDATA[Links to the data files — https://file.io/9KK9gi3MXzpj

Questions
Question 1 — Using the Excel file dataA.xlsx, which contains a 500x3 data matrix (500 data points with 3
attributes), calculate both the mean and the covariance matrix.
Question 2 — Us...]]></description><link>https://blog.dhruvdakoria.com/mean-and-covariance-of-multivariate-data-with-python</link><guid isPermaLink="true">https://blog.dhruvdakoria.com/mean-and-covariance-of-multivariate-data-with-python</guid><category><![CDATA[multivariate ]]></category><category><![CDATA[Data Science]]></category><category><![CDATA[data analysis]]></category><category><![CDATA[statistics]]></category><category><![CDATA[Python]]></category><dc:creator><![CDATA[Dhruv Dakoria]]></dc:creator><pubDate>Sat, 10 Dec 2022 19:40:33 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1670700876260/0TawCGI8I.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Links to the data files — https://file.io/9KK9gi3MXzpj</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1670700895478/zbSvjCSY_.png" alt="image.png" class="image--center mx-auto" /></p>
<h2 id="heading-questions">Questions</h2>
<p>Question 1 — Using the Excel file dataA.xlsx, which contains a 500x3 data matrix (500 data points with 3
attributes), calculate both the mean and the covariance matrix.</p>
<p>Question 2 — Using the Excel file dataB.xlsx, which contains a 500x10 data matrix (500 data points with 10
attributes), calculate both the mean and the covariance matrix.</p>
<p>Question 3 — The data were randomly generated from a normal distribution, with the means for dataA and dataB and the covariances for dataA and dataB given in meanA.xlsx, meanB.xlsx, covarianceA.xlsx and covarianceB.xlsx respectively. Briefly explain why your answers differ from the parameters used to generate the data.</p>
<h2 id="heading-implementation">Implementation</h2>
<p>Below is the Python code for calculating the mean and the covariance matrix:</p>
<pre><code class="lang-python"><span class="hljs-keyword">import</span> pandas
<span class="hljs-keyword">import</span> numpy
<span class="hljs-keyword">import</span> matplotlib.pyplot

<span class="hljs-comment"># Read dataA.xlsx and dataB.xlsx from excel file</span>
dataA = pandas.read_excel(<span class="hljs-string">"dataA.xlsx"</span>, header=<span class="hljs-literal">None</span>)
dataB = pandas.read_excel(<span class="hljs-string">"dataB.xlsx"</span>, header=<span class="hljs-literal">None</span>)

<span class="hljs-comment"># Convert data to numpy array</span>
datanpyA = pandas.DataFrame.to_numpy(dataA)
datanpyB = pandas.DataFrame.to_numpy(dataB)

<span class="hljs-comment"># Plot the data</span>
matplotlib.pyplot.figure()
matplotlib.pyplot.scatter(datanpyA[:,<span class="hljs-number">0</span>], datanpyA[:,<span class="hljs-number">1</span>], c = <span class="hljs-string">'r'</span>, marker = <span class="hljs-string">'.'</span>)
matplotlib.pyplot.scatter(datanpyB[:,<span class="hljs-number">0</span>], datanpyB[:,<span class="hljs-number">1</span>], c = <span class="hljs-string">'b'</span>, marker = <span class="hljs-string">'.'</span>)

<span class="hljs-comment"># Calculate the mean</span>
meanA = numpy.mean(datanpyA,axis = <span class="hljs-number">0</span>)
meanB = numpy.mean(datanpyB,axis = <span class="hljs-number">0</span>)

<span class="hljs-comment"># Subtract mean from the data</span>
datawithoutmeanA = datanpyA - meanA
datawithoutmeanB = datanpyB - meanB

<span class="hljs-comment"># Calculate covariance (C=X^T.X/(n-1))</span>
covA = numpy.dot(numpy.transpose(datawithoutmeanA), datawithoutmeanA)/(len(datawithoutmeanA) - <span class="hljs-number">1</span>)
covB = numpy.dot(numpy.transpose(datawithoutmeanB), datawithoutmeanB)/(len(datawithoutmeanB) - <span class="hljs-number">1</span>)


<span class="hljs-comment"># Print the Mean</span>
numpy.set_printoptions(suppress=<span class="hljs-literal">True</span>)
print(<span class="hljs-string">"Question 1 Solution:"</span>)
print(<span class="hljs-string">"Mean A =&gt;\n"</span>, meanA)
print(<span class="hljs-string">"Covariance A =&gt;\n"</span>, covA)
print(<span class="hljs-string">"\n------------------------------\n"</span>)
print(<span class="hljs-string">"Question 2 Solution:"</span>)
print(<span class="hljs-string">"Mean B =&gt;\n"</span>, meanB)
print(<span class="hljs-string">"Covariance B =&gt;\n"</span>, covB)
print(<span class="hljs-string">"\n------------------------------\n"</span>)
print(<span class="hljs-string">"Question 3 Solution"</span>)
print(<span class="hljs-string">"The mean and covariance given in the Excel files are the population parameters, while the values calculated in Python are sample estimates from dataA and dataB. \nSince the data were randomly generated from a multivariate normal distribution using meanA and covarianceA for dataA, and meanB and covarianceB for dataB, \nthe sample mean and covariance of the generated data clouds will not exactly match the parameters used to generate them, but will be close."</span>)

matplotlib.pyplot.show()



<span class="hljs-comment">#-------------- OUTPUT ----------------#</span>
<span class="hljs-comment"># Question 1 Solution:</span>
<span class="hljs-comment"># Mean A =&gt;</span>
<span class="hljs-comment">#  [0.34750193 1.02563712 0.80122132]</span>
<span class="hljs-comment"># Covariance A =&gt;</span>
<span class="hljs-comment">#  [[4.0704887  0.1502016  0.26208365]</span>
<span class="hljs-comment">#  [0.1502016  2.56307135 0.01468606]</span>
<span class="hljs-comment">#  [0.26208365 0.01468606 3.18321243]]</span>

<span class="hljs-comment"># ------------------------------</span>

<span class="hljs-comment"># Question 2 Solution:</span>
<span class="hljs-comment"># Mean B =&gt;</span>
<span class="hljs-comment">#  [9.57062029 6.15014874 8.08016477 9.55989208 8.8040749  2.19491256</span>
<span class="hljs-comment">#  0.20634971 4.54942571 0.06659806 4.65575632]</span>
<span class="hljs-comment"># Covariance B =&gt;</span>
<span class="hljs-comment">#  [[ 9.57410499  0.15742552  0.69100599 -0.04315714 -0.15529541  1.12934141</span>
<span class="hljs-comment">#    0.02644636 -0.48654602  0.95636371  0.53327821]</span>
<span class="hljs-comment">#  [ 0.15742552  9.51640579  0.47757067  0.41333501  0.00557376  0.53194456</span>
<span class="hljs-comment">#    0.11100153  0.17033133  0.84105524  1.33915044]</span>
<span class="hljs-comment">#  [ 0.69100599  0.47757067  8.65741988 -0.31162145  0.16556618  0.19225256</span>
<span class="hljs-comment">#    0.18585505  0.4101727   0.22889477 -0.15427328]</span>
<span class="hljs-comment">#  [-0.04315714  0.41333501 -0.31162145 10.27052739  0.2510052   0.34881198</span>
<span class="hljs-comment">#    0.68992571  0.32255801  0.72253427  1.0499889 ]</span>
<span class="hljs-comment">#  [-0.15529541  0.00557376  0.16556618  0.2510052   9.65117562  0.65088712</span>
<span class="hljs-comment">#    0.15264545 -0.16605455  1.35788702 -0.19805019]</span>
<span class="hljs-comment">#  [ 1.12934141  0.53194456  0.19225256  0.34881198  0.65088712 10.91504476</span>
<span class="hljs-comment">#    0.80109036  0.33946519  0.09688857  1.34008328]</span>
<span class="hljs-comment">#  [ 0.02644636  0.11100153  0.18585505  0.68992571  0.15264545  0.80109036</span>
<span class="hljs-comment">#    9.26492074 -0.19919067 -0.21481801  0.85962642]</span>
<span class="hljs-comment">#  [-0.48654602  0.17033133  0.4101727   0.32255801 -0.16605455  0.33946519</span>
<span class="hljs-comment">#   -0.19919067  9.1616525   0.29534256  0.13637128]</span>
<span class="hljs-comment">#  [ 0.95636371  0.84105524  0.22889477  0.72253427  1.35788702  0.09688857</span>
<span class="hljs-comment">#   -0.21481801  0.29534256 10.14780653  1.96290271]</span>
<span class="hljs-comment">#  [ 0.53327821  1.33915044 -0.15427328  1.0499889  -0.19805019  1.34008328</span>
<span class="hljs-comment">#    0.85962642  0.13637128  1.96290271 10.60596324]]</span>

<span class="hljs-comment"># ------------------------------</span>

<span class="hljs-comment"># Question 3 Solution</span>
<span class="hljs-comment"># The mean and covariance given in the Excel files are the population parameters, while the values calculated in Python are sample estimates from dataA and dataB. </span>
<span class="hljs-comment"># Since the data were randomly generated from a multivariate normal distribution using meanA and covarianceA for dataA, and meanB and covarianceB for dataB, </span>
<span class="hljs-comment"># the sample mean and covariance of the generated data clouds will not exactly match the parameters used to generate them, but will be close.</span>
</code></pre>
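<p>As a quick sanity check, the hand-rolled estimator above can be compared with <code>numpy.cov</code>, which computes the same sample covariance (synthetic data is used here, since the original Excel files may not be at hand):</p>

```python
import numpy

# Synthetic 500x3 data standing in for dataA.xlsx
rng = numpy.random.default_rng(0)
X = rng.normal(size=(500, 3))

# Manual estimator: mean-center, then C = X^T.X / (n - 1)
centered = X - numpy.mean(X, axis=0)
cov_manual = numpy.dot(centered.T, centered) / (len(centered) - 1)

# numpy.cov treats rows as variables by default, hence rowvar=False
cov_numpy = numpy.cov(X, rowvar=False)

print(numpy.allclose(cov_manual, cov_numpy))  # True
```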
]]></content:encoded></item><item><title><![CDATA[Fisher's Linear Discriminant Analysis with Python]]></title><description><![CDATA[Data file used in the example: https://file.io/86F7M1iYqVUt

Question
Using the multivariate data in the file fld1.xlsx:
(a) determine the discriminant line found by Fisher's Linear Discriminant.
(b) Plot both the data and the discriminant line on a s...]]></description><link>https://blog.dhruvdakoria.com/fishers-linear-discriminant-analysis-with-python</link><guid isPermaLink="true">https://blog.dhruvdakoria.com/fishers-linear-discriminant-analysis-with-python</guid><category><![CDATA[Python]]></category><category><![CDATA[Data Science]]></category><category><![CDATA[data analysis]]></category><category><![CDATA[big data]]></category><category><![CDATA[Machine Learning]]></category><dc:creator><![CDATA[Dhruv Dakoria]]></dc:creator><pubDate>Sat, 10 Dec 2022 19:27:50 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1670700041461/Qhiv5oVle.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Data file used in the example: https://file.io/86F7M1iYqVUt</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1670700055347/SjqStv1z1.png" alt="image.png" class="image--center mx-auto" /></p>
<h2 id="heading-question">Question</h2>
<p>Using the multivariate data in the file fld1.xlsx:
(a) determine the discriminant line found by Fisher's Linear Discriminant.
(b) Plot both the data and the discriminant line on a scatter plot.
(c) Using this line, determine the class of each of the data points in the dataset, assuming that the
threshold is 0 (i.e. positive values are in one class and negative values in the other).
(d) Determine what percentage of data points are incorrectly classified.
NOTE: The first 2 columns in fld1.xlsx are data columns. The third column is the class to which each data
point belongs.</p>
<h2 id="heading-python-implementation">Python Implementation</h2>
<pre><code class="lang-python"><span class="hljs-keyword">import</span> pandas <span class="hljs-keyword">as</span> pd
<span class="hljs-keyword">import</span> numpy <span class="hljs-keyword">as</span> np
<span class="hljs-keyword">import</span> matplotlib.pyplot

<span class="hljs-comment"># read data from excel file fld1 and create a scatterplot</span>
fld1 = pd.read_excel(<span class="hljs-string">"fld1.xlsx"</span>, header=<span class="hljs-literal">None</span>)
fld1_np = pd.DataFrame.to_numpy(fld1)
output_arr = fld1_np[:,<span class="hljs-number">2</span>]
<span class="hljs-comment"># print(output_arr)</span>

fld1_np_1 = pd.DataFrame.to_numpy(fld1.head(<span class="hljs-number">300</span>))
fld1_np_0 = pd.DataFrame.to_numpy(fld1.tail(<span class="hljs-number">200</span>))

fld1_class_1 = fld1_np_1[:, :<span class="hljs-number">2</span>]
<span class="hljs-comment"># print(fld1_class_1)</span>

fld1_class_0 = fld1_np_0[:, :<span class="hljs-number">2</span>]
<span class="hljs-comment">#print(fld1_class_0)</span>

data_X = np.concatenate((fld1_class_1,fld1_class_0))
<span class="hljs-comment">#print(data_X)</span>

matplotlib.pyplot.figure()
matplotlib.pyplot.scatter(fld1_class_1[:,<span class="hljs-number">0</span>], fld1_class_1[:,<span class="hljs-number">1</span>], c = <span class="hljs-string">'r'</span>, marker = <span class="hljs-string">'.'</span>)
matplotlib.pyplot.scatter(fld1_class_0[:,<span class="hljs-number">0</span>], fld1_class_0[:,<span class="hljs-number">1</span>], c = <span class="hljs-string">'b'</span>, marker = <span class="hljs-string">'.'</span>)


<span class="hljs-comment"># Calculate the mean</span>
class1_mean = np.mean(fld1_class_1,axis = <span class="hljs-number">0</span>)
class0_mean = np.mean(fld1_class_0,axis = <span class="hljs-number">0</span>)

<span class="hljs-comment"># Subtract mean from the data</span>
class1_mc = fld1_class_1 - class1_mean
class0_mc = fld1_class_0 - class0_mean

<span class="hljs-comment"># Calculate within-class scatter matrices (S = X^T.X, no 1/(n-1) factor)</span>
class1_cov = np.dot(class1_mc.T, class1_mc)
class0_cov = np.dot(class0_mc.T, class0_mc)
<span class="hljs-comment">#print(class1_cov)</span>
<span class="hljs-comment">#print(class0_cov)</span>

<span class="hljs-comment"># implement Fisher's linear discriminant: w = Sw^-1 * (u1 - u0)</span>
Sw = class1_cov + class0_cov
w = np.dot(np.linalg.inv(Sw),(class1_mean - class0_mean))

print(<span class="hljs-string">"Fisher's Linear Discriminant direction w is: \n"</span>,w)
print(w[<span class="hljs-number">0</span>])
print(w[<span class="hljs-number">1</span>])
matplotlib.pyplot.axline((<span class="hljs-number">0</span>,<span class="hljs-number">0</span>),w,c=<span class="hljs-string">'black'</span>,linestyle=<span class="hljs-string">'--'</span>)

<span class="hljs-comment"># calc slope and y-intercept of the decision boundary (perpendicular to w)</span>
thresh = <span class="hljs-number">0</span>
slope_1 = -w[<span class="hljs-number">0</span>]/w[<span class="hljs-number">1</span>]
y_intercept = thresh/w[<span class="hljs-number">1</span>]
print(<span class="hljs-string">"y-intercept is "</span>, y_intercept)

matplotlib.pyplot.axline((<span class="hljs-number">0</span>,y_intercept),slope = slope_1,c=<span class="hljs-string">'green'</span>,linestyle=<span class="hljs-string">'--'</span>)

<span class="hljs-comment"># prediction and error calculation</span>
prediction = (np.sign(np.dot(w,data_X.T) + thresh) + <span class="hljs-number">1</span>)/<span class="hljs-number">2</span>
error = np.sum(abs(prediction - output_arr))
print(<span class="hljs-string">"\nnumber of errors = "</span>,error)
print(<span class="hljs-string">"\npercentage of errors = "</span>,(error/len(output_arr))*<span class="hljs-number">100</span>,<span class="hljs-string">"%"</span>)

<span class="hljs-comment"># Q = np.squeeze(data_X[error])</span>

<span class="hljs-comment"># matplotlib.pyplot.scatter(Q[:,0],Q[:,1], c = 'g', marker = 'o')</span>

matplotlib.pyplot.show()
</code></pre>
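<p>A small self-contained check of the same computation (with made-up, well-separated Gaussian classes rather than the fld1.xlsx data): solving the linear system with <code>np.linalg.solve</code> gives the same w as multiplying by the explicit inverse, and is numerically safer when Sw is ill-conditioned:</p>

```python
import numpy as np

# Two synthetic, well-separated classes standing in for fld1.xlsx
rng = np.random.default_rng(1)
class1 = rng.normal(loc=[2.0, 2.0], size=(300, 2))
class0 = rng.normal(loc=[-2.0, -2.0], size=(200, 2))

u1, u0 = class1.mean(axis=0), class0.mean(axis=0)
S1 = (class1 - u1).T @ (class1 - u1)
S0 = (class0 - u0).T @ (class0 - u0)
Sw = S1 + S0

# Equivalent to np.linalg.inv(Sw) @ (u1 - u0), without forming the inverse
w = np.linalg.solve(Sw, u1 - u0)

# With a threshold of 0, class-1 projections should be mostly positive
# and class-0 projections mostly negative.
print(np.mean(class1 @ w > 0), np.mean(class0 @ w < 0))
```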
]]></content:encoded></item><item><title><![CDATA[Unlock your team’s creativity — Summary]]></title><description><![CDATA[Video Tutorial Link: Unlock your team’s creativity

In the video tutorial, Lisa Bodell explains how with increased automation of tasks in the modern workplace, the ability to think creatively is becoming more and more valuable. She shares ideas on ho...]]></description><link>https://blog.dhruvdakoria.com/unlock-your-teams-creativity-summary</link><guid isPermaLink="true">https://blog.dhruvdakoria.com/unlock-your-teams-creativity-summary</guid><category><![CDATA[creativity]]></category><category><![CDATA[problem solving skills]]></category><category><![CDATA[team]]></category><category><![CDATA[business]]></category><category><![CDATA[work]]></category><dc:creator><![CDATA[Dhruv Dakoria]]></dc:creator><pubDate>Fri, 09 Dec 2022 22:45:26 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1670625588552/WzrlsOKx7.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Video Tutorial Link: <a target="_blank" href="https://www.linkedin.com/learning/unlock-your-team-s-creativity/why-creativity-is-essential-and-accessible?collection=urn%3Ali%3AlearningCollection%3A6569702747589890048&amp;u=56982905">Unlock your team’s creativity</a></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1670625119354/ujEAjSeVu.png" alt="image.png" /></p>
<p>In the video tutorial, Lisa Bodell explains how with increased automation of tasks in the modern workplace, the ability to think creatively is becoming more and more valuable. She shares ideas on how anyone can be creative if they have the necessary tools and an open mind. She also teaches how to change our routine and outlook, apply creative solutions to everyday problems at work, and utilise creative thinking to spot business and team growth opportunities. These methods can help a team become more adaptable under duress or possibly generate the next revolutionary idea.</p>
<h1 id="heading-learn-key-creative-techniques">Learn Key Creative Techniques</h1>
<h3 id="heading-shake-up-your-setting">Shake up your setting</h3>
<p>Innovative thinking is stifled by a predictable routine. This method offers four straightforward adjustments to inspire original thought and help people break free of their daily work routine. The first adjustment involves changing the venue of your next meeting, for example to a coffee shop, a vendor’s site, or a different conference room. Second, swap out the standard props by holding the meeting standing up, using post-its and markers, serving different snacks, or setting up music to maintain a positive atmosphere. Third, alter the regular schedule to surprise folks just enough to keep them interested. This can be done by setting up a puzzle or brainteaser to be completed as a group, adjourning the meeting early, or reserving an escape room adventure. Finally, switch up the usual presenters by inviting a special visitor who could add expertise or experience to the discussion.</p>
<h3 id="heading-guide-teams-in-to-a-creative-mindset">Guide teams into a creative mindset</h3>
<p>When a brainstorming session stalls, a method called “Forced Connections” can help bring in new ideas and foster a creative mentality. After writing the goal of the brainstorming session on the whiteboard, select a random item visible to everyone and capture its characteristics. Next, assign groups and ask them to “tie these characteristics back to the goal”.</p>
<h3 id="heading-asking-better-questions">Asking better questions</h3>
<p>Killer questions are incredibly powerful: simply posing a question that demands thoughtful reflection increases the likelihood that the person you are asking will act on whatever you are asking them to do. Killer questions are “provocative, open-ended, and approach the subject either positively or negatively”, like asking “what do you hate most about our service?”. They train the team to approach audiences in unorthodox ways and ask the kinds of questions that result in actual innovation and problem-solving.</p>
<h3 id="heading-rethink-your-offerings">ReThink your offerings</h3>
<p>Finding fresh potential in your current goods or services might be somewhat difficult. However, re-evaluating your product offerings may reveal untapped markets and fresh sources of income. “ReThink” helps you adopt the attitude required to unlock that potential.</p>
<h3 id="heading-give-wild-ideas-a-chance">Give wild ideas a chance</h3>
<p>Before we can think of a truly disruptive idea, we must be receptive to unusual and even startling ideas. The PPCO (Pluses, Potentials, Concerns and Overcome) technique enables us to overcome our innate tendency to be pessimistic so that we can truly give crazy ideas a shot. Additionally, it offers a structure for respectful criticism of those ideas and reorients our attention from an idea’s drawbacks to its possible benefits.</p>
<h1 id="heading-apply-creativity-to-work-challenges">Apply creativity to Work Challenges</h1>
<h3 id="heading-bite-sized-problem-solving">Bite-sized problem solving</h3>
<p>It’s a frequent misconception that innovation is solely about creating disruptive, ground-breaking new products; it can also mean continuously enhancing products that are already on the market. “Plus three, minus three” is a method that can assist a team in innovating in modest but significant ways. By simply adding or removing features, it enables you to get more value from your current goods or services, and by dissecting a product into its component elements, you can spot chances for true innovation that are both immediate and manageable.</p>
<h3 id="heading-convert-pains-to-gains">Convert pains to gains</h3>
<p>The “pain to gain” technique offers a tried-and-true strategy for resolving employee pain points, even though it was created to help workers adopt the mindset of their consumers. The idea is to note the pain points in the left column and potential solutions to address them in the right column.</p>
<h3 id="heading-40-new-opportunities">40 new opportunities</h3>
<p>The 40 new opportunities technique, inspired by the problem-solving methodology known as TRIZ (Theory of Inventive Problem Solving), is a roadmap a business can use to continually reinvent its goods and services. TRIZ was created by a Russian patent examiner who examined 400,000 patent applications in search of similarities between the problems and the inventors’ approaches to solving them, and who concluded that each invention could be linked to one of 40 principles.</p>
<h3 id="heading-find-your-next-big-partnership">Find your next big partnership</h3>
<p>Partnering with a complementary business can help increase reach, sharpen customer insights, and provide the company with important intellectual property, new clients, or expertise in an area it lacks. The “Within, Adjacent, and Beyond” technique can help with finding ideal strategic partners. Remember that the most promising partnerships are win-win relationships. In a group setting, identify the top 3 partners and develop ideas for joint-venture goods or services with one of those partners. Take the team through this at least once per year to uncover collaborators who can share in the risk-reward equation and accelerate the achievement of set goals.</p>
<h1 id="heading-master-the-art-of-creative-thinking">Master the Art of Creative Thinking</h1>
<h3 id="heading-turning-the-impossible-into-possible">Turning the impossible into possible</h3>
<p>The common denominator among businesses where innovation is preached but not practised is a culture too negative for big ideas to even be floated. From Impossible to Possible is a framework that aims to get the issues outlined and, as a group, to think critically about how to resolve them. From the 3 categories of impossibilities (industry, customer, or internal), decide on one category and, as a small group, brainstorm as many impossibilities as possible for 15 minutes. The idea is then to spend the next 15 minutes of discussion outlining ways to make the coworkers’ seemingly hard tasks possible. By describing what cannot occur, a path to what can is opened, and by bringing other viewpoints into the problem-solving process, norms can be broken and employee morale is raised.</p>
<h3 id="heading-increase-team-agility">Increase team agility</h3>
<p>Practice accepting constraints rather than letting them become a hindrance. Constraints encourage invention, require focus, and improve resourcefulness: getting more done with less. Utilize them to inform the decisions made rather than attempting to eliminate them. A team activity called “wild cards” can improve a team’s ability to be resourceful in the face of change.</p>
<h3 id="heading-discover-new-revenue-streams">Discover new revenue streams</h3>
<p>Investigate the “where/when/how/who else” method to uncover new applications and sources of income for the firm. This pushes people to think in ways that go far beyond the norm; it’s simpler to tame a wild concept than to make a dull one interesting. Additionally, it promotes unrestricted thought and helps identify new markets, audiences, and sources of income for current offerings.</p>
<h3 id="heading-activate-customer-centric-thinking">Activate customer-centric thinking</h3>
<p>Engage your staff in the practice known as “hunting grounds” to discover fresh prospects for the company. The goal of hunting grounds is to identify unmet client wants and areas where the company can outperform its competition. The business is broken down into categories, and positives and negatives are noted for each category. Participants write down their ideas in the following format: what do I observe, what does it mean, and what do I do next to learn more? Viewing the company through the eyes of its customers in this way can identify urgent areas to improve or capitalize on. This exercise can be performed annually or whenever you need to identify new prospects for innovation within the company.</p>
]]></content:encoded></item></channel></rss>