When searching and browsing the web, more and more of the information we encounter is generated or mediated by large language models (LLMs), whether we are looking up a recipe, getting help with an essay, or seeking relationship advice. Yet, there is limited understanding of how individuals perceive advice provided by these LLMs. In this paper, we explore people's perceptions of LLM-generated advice and the role that diverse user characteristics (i.e., personality and technology readiness) play in shaping those perceptions. Further, as LLM-generated advice can be difficult to distinguish from human advice, we assess its perceived creepiness. To investigate this, we ran an exploratory study (N = 91) in which participants rated advice written in different styles (generated by GPT-3.5 Turbo). Notably, our findings suggest that individuals who identify as more agreeable tend to like the advice more and find it more useful. Further, individuals with higher technological insecurity are more likely to follow the advice, find it more useful, and deem it more likely that a friend could have given it. Lastly, advice given in a ‘skeptical’ style was rated most unpredictable, and advice given in a ‘whimsical’ style was rated least malicious, indicating that the style of LLM advice influences user perceptions. Our results also provide an overview of people's considerations regarding how likely they are to seek advice from these digital assistants, how receptive they are to it, and what kinds of advice they would seek. Based on these results, we offer design takeaways for LLM-generated advice and outline future research directions to further inform the design of advice-giving support applications for people with diverse expectations and needs.