Performance isn't a problem … until it is
Decisions to use a scripting language for a project are often made because one thinks the language will yield certain conveniences: easy to learn; fast time to delivery; less boilerplate; huge ecosystem; makes programmers happier; attracts hot young talent; etc. In an effort to stay on topic, let's put aside these claims.
Performance is seldom identified as a convenience. When the issue is raised, the usual response is the oft-quoted "premature optimization is the root of all evil." This tends to be more a thought-ending cliché than anything constructive, but even if the discourse survives it's very hard to make the case for performance, especially with greenfield projects that don't have a good indication of their performance requirements.
The snazzy scripts seem to deliver. You've banged out a lot of value-add without wrestling types, caring about memory, waiting on compilers, curating curly braces, or any of the other silly banalities of outdated programming.
You confidently repeat this process many times. As you progress in your career your projects increase in importance, each successful app compounds the credence that the scriptaculous conveniences are good for you, and good for the company … win-win.
Somehow, after a bit of time in production, certain things run unacceptably slowly. The super-convenient scripting language has some fatal flaw like a global interpreter lock or a penalty for object creation or method invocation. But the project is in production, and the risk of moving off the language is much higher than it was pre-production. Being clever programmers, we come up with workarounds, usually some means of scaling the bottleneck horizontally. They tend to involve a message bus or queuing system; the phrases NoSQL or reactor-pattern are probably involved somewhere. In the best case you drop down to a C library (hopefully the bindings already exist). You've shifted your problem from slow code to complex code, likely with more operational overhead.
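The global-interpreter-lock failure mode is easy to demonstrate. A minimal sketch, assuming CPython (where the GIL lets only one thread run bytecode at a time): splitting a CPU-bound loop across two threads buys you nothing.

```python
import threading
import time

def count_down(n):
    # pure CPU-bound work: no I/O, so no chance to release the GIL usefully
    while n > 0:
        n -= 1

N = 5_000_000

start = time.perf_counter()
count_down(N)
sequential = time.perf_counter() - start

start = time.perf_counter()
threads = [threading.Thread(target=count_down, args=(N // 2,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
threaded = time.perf_counter() - start

# On CPython the threaded version is typically no faster (often slower,
# from lock contention) than the sequential one, even with spare cores.
print(f"sequential: {sequential:.2f}s  threaded: {threaded:.2f}s")
```

The timings vary by machine, which is exactly the point: this is the kind of behavior you measure before production, not discover in it.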
Contrived pathology aside, I'm not trying to be cynical. Performance bottlenecks are unknown until they become known. Absence of evidence, no matter how well-sustained, is never evidence of absence … of bottlenecks.
I've been picking on scripting languages, not because they may have performance issues, but because those who fanatically promote them tend to marginalize the potential peril that has yet to manifest. I.e. fools.
Is your esteem for some language disproportionate to your expertise? The ethical thing to do would be to use it once you are more of an expert. Just because it makes common things trivial doesn't mean it makes hard things easy.
If you know your scripting language has inherent performance issues, using another language may be a good option. Modern fast-ish languages like Java, C# & Go have many of the convenience features of scripting languages, without the resource consumption per unit of work that many scripting languages exhibit.
Does a design artifact prevent a scaling strategy? Make sure you have a plan B in your back pocket. Write things to be rewritten, chances are you didn't get it right. What are your performance limits? Where does your app fall over? Can you throw more hardware at it? How much?
Do you have any estimate of demand? Maybe peak load? Design the project for 10x that, and plan out what is required to take it to 100x. Maybe 100x or even 10x isn't viable, but at least you know.
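That exercise can be a back-of-envelope calculation. A sketch, where every number (the peak request rate, the per-server throughput, the headroom factor) is invented purely for illustration:

```python
import math

peak_rps = 200         # hypothetical: estimated peak requests per second
per_server_rps = 150   # hypothetical: measured throughput of one app server

def servers_needed(load_rps, capacity_rps, headroom=0.7):
    # keep 30% headroom so no server runs saturated at peak
    return math.ceil(load_rps / (capacity_rps * headroom))

print(servers_needed(peak_rps, per_server_rps))        # today
print(servers_needed(10 * peak_rps, per_server_rps))   # designed-for 10x
print(servers_needed(100 * peak_rps, per_server_rps))  # planned-out 100x
```

If the 100x number is absurd for your budget or your architecture, better to learn that from arithmetic than from an outage.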
Discourse with fools
Substitute functional, object-oriented, concurrent, vm-based, dynamically-typed, statically-typed, industry standard, cutting-edge, etc. for scripting. Substitute database or operating system for language. You get the point.
It can be difficult to argue a subtle subject like hedging bets when confronted with gross generalizations like paradigms or buzzwords. Bring the conversation back down to earth by asking specific questions about actual things that affect the actual project at hand.
Start with the claims. If node.js lets you ship code faster, how much can we tighten the deadline? If mongodb actually is better, how will that show up in our bottom line? Then probe how prepared they are for future roadblocks. Who knows? Honest dialectic may yield unanticipated results.
If someone whips out Knuth's quote on premature optimization, kindly explain that risk mitigation is neither premature nor optimal. Premature implies a lack of anticipation; hedging, for the most part, is actionable anticipation. Hedges are made with the understanding that they may never be exercised, and value invested but unused is arguably the opposite of optimal. If all else fails, point out that they are taking Knuth out of context.
Programmers waste enormous amounts of time thinking about, or worrying about, the speed of noncritical parts of their programs, and these attempts at efficiency actually have a strong negative impact when debugging and maintenance are considered. We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. Yet we should not pass up our opportunities in that critical 3% …
He then goes on about how good programmers should measure and test performance claims. In summary, he advises being well informed about your software's behavior and not ignoring the critical. It seems hard to argue that he intended this to be a thought-ending cliché against hedging performance issues.
At the end of the day, nobody wants unforeseen problems, even the fools that pretend there is no impact. Take comfort in your ignorance. Arrogance, the alternative, is much more damning. Don't be a turkey.